Published By: DATAVERSITY     Published Date: Jul 24, 2014
Will the “programmable era” of computers be replaced by Cognitive Computing systems that can learn from interactions and reason through dynamic experience, much as humans do? With rapidly increasing volumes of Big Data, there is a compelling need for smarter machines that can organize data faster, make better sense of it, discover insights, and then learn, adapt, and improve over time without direct programming. This paper is sponsored by: Cognitive Scale.
Tags : 
data, data management, cognitive computing, machine learning, artificial intelligence, research paper
    
DATAVERSITY
Published By: DATAVERSITY     Published Date: Nov 20, 2015
The competitive advantages realized from a dependable Business Intelligence and Analytics (BI/A) program are well documented. Everything from reduced business costs and increased customer retention to better decision making and the ability to forecast opportunities has been observed as an outcome of such programs. Implementing such a program remains a necessity for any growing or mature enterprise. The establishment of a comprehensive BI/A program that includes traditional Descriptive Analytics along with next-generation categories such as Predictive and Prescriptive Analytics is indispensable for business success.
Tags : 
data, data management, analytics, business intelligence, data science
    
DATAVERSITY
Published By: Ted Hills     Published Date: Mar 08, 2017
NoSQL database management systems give us the opportunity to store our data according to more than one data storage model, but our entity-relationship data modeling notations are stuck in SQL land. Is there any need to model schema-less databases, and is it even possible? In this short white paper, Ted Hills examines these questions in light of a recent paper from MarkLogic on the hybrid data model.
Tags : 
    
Ted Hills
Published By: Ted Hills     Published Date: Mar 08, 2017
This paper explores the differences between three situations that appear on the surface to be very similar: a data attribute that may occur zero or one times, a data attribute that is optional, and a data attribute whose value may be unknown. It shows how each of these different situations is represented in Concept and Object Modeling Notation (COMN, pronounced “common”). The theory behind the analysis is explained in greater detail by three papers: Three-Valued Logic, A Systematic Solution to Handling Unknown Data in Databases, and An Approach to Representing Non-Applicable Data in Relational Databases.
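To make the three-way distinction concrete, here is a minimal Python sketch (an illustration of the distinction only, not COMN notation; the Person type and UNKNOWN sentinel are invented for the example):

    from dataclasses import dataclass
    from typing import Optional, Union

    class Unknown:
        """Sentinel: a value exists in reality but is not known (distinct from None)."""
        def __repr__(self):
            return "UNKNOWN"

    UNKNOWN = Unknown()

    @dataclass
    class Person:
        name: str                        # occurs exactly once (required)
        middle_name: Optional[str]       # optional: None means "has no middle name"
        birth_date: Union[str, Unknown]  # a date always exists, but may be UNKNOWN

    alice = Person(name="Alice", middle_name=None, birth_date=UNKNOWN)
    print(alice)
    # middle_name=None   -> Alice has no middle name: the absence is a known fact
    # birth_date=UNKNOWN -> Alice certainly has a birth date; we just don't know it

Conflating these cases, as a single NULL marker does, is exactly the ambiguity the paper sets out to untangle.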
Tags : 
    
Ted Hills
Published By: Ted Hills     Published Date: Mar 08, 2017
Much has been written and debated about the use of SQL NULLs to represent unknown values and the possible use of three-valued logic. However, there has never been a systematic application of any three-valued logic to the logical expressions of computer programs. This paper lays the foundation for a systematic application of three-valued logic to one of the two problems inadequately addressed by SQL NULLs.
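For readers who want to see such a logic in executable form, here is a minimal sketch of Kleene's strong three-valued logic, one standard formulation of the kind of logic discussed here (the paper's own system may differ in its details):

    U = None  # the third truth value: "unknown"

    def and3(a, b):
        if a is False or b is False:
            return False      # false dominates conjunction, even against unknown
        if a is U or b is U:
            return U
        return True

    def or3(a, b):
        if a is True or b is True:
            return True       # true dominates disjunction, even against unknown
        if a is U or b is U:
            return U
        return False

    def not3(a):
        return U if a is U else (not a)

    # Unlike two-valued logic, (x OR NOT x) is not always true:
    print(or3(U, not3(U)))    # -> None (unknown), mirroring SQL's NULL behavior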
Tags : 
    
Ted Hills
Published By: Ted Hills     Published Date: Mar 08, 2017
Ever since Codd introduced so-called “null values” to the relational model, there have been debates about exactly what they mean and their proper handling in relational databases. In this paper I examine the meaning of tuples and relations containing “null values”. For the type of “null value” representing that data are not applicable, I propose an interpretation and a solution that is more rigorously defined than the SQL NULL or other similar solutions, and which can be implemented in a systematic and application-independent manner in database management systems.
Tags : 
    
Ted Hills
Published By: Innovative Systems     Published Date: Feb 21, 2019
From years of data quality initiatives, hundreds of case studies, and research by industry experts, a number of common data quality success factors have emerged. This paper discusses key characteristics of data quality initiatives and provides actionable guidelines to help make your project a success, from conception through implementation and tracking your ROI. Readers will learn how to:
• Quantify the effect of poor data quality on the organization
• Prioritize projects for faster ROI
• Gain buy-in, from employees through senior management
Tags : 
    
Innovative Systems
Published By: Stardog Union     Published Date: Mar 13, 2019
Enterprises must transition from merely collecting their data to contextualizing it in order to fully leverage data as a strategic asset. Existing data management solutions such as databases and data lakes encourage data sprawl and duplication. True data unification, however, can be achieved with a Knowledge Graph, which layers seamlessly on top of your existing data infrastructure to reveal the interrelationships in your data, no matter its source or format. The Knowledge Graph is also a highly scalable solution, since it retains every analysis performed as a reusable asset, drastically reducing the need for data wrangling over time. Download Knowledge Graphs 101 to learn how this technology differs from a graph database, how it compares to MDM and data lake solutions, and how to leverage artificial intelligence and machine learning within a Knowledge Graph.
Tags : 
    
Stardog Union
Published By: Denodo     Published Date: Feb 27, 2019
Organizations continue to struggle to integrate data quickly enough to support the needs of business stakeholders, who need integrated data faster with each passing day. Traditional data integration technologies have not been able to solve this fundamental problem: they deliver data in scheduled batches and cannot support many of today’s rich and complex data types. Data virtualization is a modern data integration approach that is already meeting today’s data integration challenges and provides the foundation for data integration in the future. Download this whitepaper to learn more about:
• The fundamental challenge for organizations today
• Why traditional solutions fall short
• Why data virtualization is the core solution
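As a rough illustration of the idea (not Denodo's actual architecture; all names here are invented), the sketch below shows a "virtual view" that federates two stand-in sources at query time instead of copying their data into scheduled batches:

    def customers_from_crm():
        # stand-in for a live query against one source system
        yield {"id": 1, "name": "Acme", "source": "crm"}

    def customers_from_billing():
        # stand-in for a live query against another source system
        yield {"id": 1, "balance": 250.0, "source": "billing"}

    def unified_customer_view():
        # the join happens on demand; no batch copy is ever materialized
        billing = {row["id"]: row for row in customers_from_billing()}
        for row in customers_from_crm():
            extra = billing.get(row["id"], {})
            yield {**row, "balance": extra.get("balance")}

    for customer in unified_customer_view():
        print(customer)  # {'id': 1, 'name': 'Acme', 'source': 'crm', 'balance': 250.0}

Because consumers query the view rather than the sources, the underlying systems can change without breaking them.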
Tags : 
    
Denodo
Published By: Tamr, Inc.     Published Date: Feb 08, 2019
Traditional data management practices, such as master data management (MDM), have been around for decades – as have the approaches vendors take in developing these capabilities. And they were well-equipped for the problem at hand: managing data at modest size and complexity. However, as enterprises mature and start to view their data assets as a source of competitive advantage, new methods to managing enterprise data become desirable. Enterprises now need approaches to data management that can solve critical issues around speed and scale in an increasingly complex data environment. This paper explores how data curation technology can be used to solve data mastering challenges at scale.
Tags : 
    
Tamr, Inc.
Published By: MEGA International     Published Date: Mar 07, 2019
The effectiveness of agile processes is often jeopardized because the architectural and organizational prerequisites of agility are neglected. This white paper proposes a new architecture framework, the Agile Architecture Framework (AAF), that meets the needs of the digital enterprise. It develops a vision that uniquely combines:
• Methods for decomposing the system into loosely coupled services and autonomous teams
• Alignment mechanisms rooted in business strategy
• Architecture patterns that leverage the latest software innovations
• Results from large enterprises that started their agile-at-scale journey several years ago
This proposed architecture approach will enable organizations at all scales to better realize the Boundaryless Information Flow™ vision, achieved through global interoperability in a secure, reliable, and timely manner.
Tags : 
    
MEGA International
Published By: DATAVERSITY     Published Date: Feb 27, 2013
In its most basic definition, unstructured data simply means any form of data that does not easily fit into a relational model or a set of database tables.
Tags : 
white paper, dataversity, unstructured data, enterprise data management, data, data management
    
DATAVERSITY
Published By: DATAVERSITY     Published Date: Jun 14, 2013
This report analyzes many of the challenges faced when beginning a new Data Governance program and outlines the crucial elements of successfully executing such a program. “Data Governance” is a term fraught with nuance, misunderstanding, myriad opinions, and fear; it is often enough to keep Data Stewards and senior executives awake late into the night. The modern enterprise needs reliable and sustainable control over its technological systems, business processes, and data assets. Such control is essential to competitive success in an ever-changing marketplace driven by the exponential growth of data, mobile computing, social networking, the need for real-time analytics and reporting mechanisms, and increasing regulatory compliance requirements. Data Governance can enhance and buttress (or resuscitate, if needed) the strategic and tactical business drivers every enterprise needs for market success. This paper is sponsored by: ASG, DGPO, and DebTech International.
Tags : 
data, data management, data governance, data steward, dataversity, research paper
    
DATAVERSITY
Published By: Databricks     Published Date: Sep 13, 2018
Learn how to get started with Apache Spark™. Apache Spark™’s ability to speed up analytic applications by orders of magnitude, its versatility, and its ease of use are quickly winning over the market. With Spark’s appeal to developers, end users, and integrators solving complex data problems at scale, it is now the most active open source project within the big data community. With rapid adoption by enterprises across a wide range of industries, Spark has been deployed at massive scale, collectively processing multiple petabytes of data on clusters of over 8,000 nodes. If you are a developer or data scientist interested in big data, learn how Spark may be the tool for you. Databricks is happy to present this ebook as a practical introduction to Spark. Download this ebook to learn:
• Spark’s basic architecture
• Why Spark is a popular choice for data analytics
• What tools and features are available
• How to get started right away through interactive sample code
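As a taste of what interactive Spark code looks like, here is a minimal PySpark sketch (assuming a local Spark installation and a hypothetical people.csv file; this snippet is not taken from the ebook itself):

    from pyspark.sql import SparkSession

    # Start a local Spark session; in a cluster this would point at the cluster manager
    spark = SparkSession.builder.appName("intro").getOrCreate()

    # Read a CSV into a DataFrame, letting Spark infer column types from the data
    df = spark.read.csv("people.csv", header=True, inferSchema=True)

    # A simple distributed query: filter, group, and count, then print the result
    df.filter(df.age > 30).groupBy("country").count().show()

    spark.stop()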
Tags : 
    
Databricks
Published By: Stardog Union     Published Date: Jul 27, 2018
When enterprises consider the benefits of data analysis, what is often overlooked is the challenge of data variety, even though most successful outcomes are driven by it. Businesses are still struggling with how to query distributed, heterogeneous data using a unified data model. Fortunately, Knowledge Graphs provide a schema-flexible solution based on modular, extensible data models that evolve over time to create a truly unified solution. How is this possible? Download and discover:
• Why businesses should organize information using nodes and edges instead of rows, columns, and tables
• Why schema-free and schema-rigid solutions eventually prove to be impractical
• The three categories of data diversity, including semantic and structural variety
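To illustrate the nodes-and-edges point, here is a small Python sketch (invented data, and not how Stardog stores graphs internally) showing how facts stored as edges need no schema change when a new relationship type appears:

    # Each fact is a (subject, predicate, object) edge rather than a row in a fixed table
    edges = [
        ("alice", "works_for", "acme"),
        ("acme",  "based_in",  "boston"),
        ("alice", "knows",     "bob"),
    ]

    def neighbors(node, predicate):
        # follow outgoing edges of a given kind from a node
        return [o for s, p, o in edges if s == node and p == predicate]

    # A brand-new kind of fact is just another edge -- no ALTER TABLE required:
    edges.append(("bob", "works_for", "acme"))

    print(neighbors("alice", "works_for"))  # ['acme']
    print(neighbors("acme", "based_in"))    # ['boston']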
Tags : 
    
Stardog Union
Published By: Adlib Software     Published Date: Feb 09, 2018
In financial services, contracts often exist in highly dispersed formats, with jurisdictional complexities and risk aspects that change over time—resulting in high levels of risk. Learn how to transform contracts into actionable, defensible assets with a robust contract intelligence solution.
Tags : 
    
Adlib Software
Published By: Couchbase     Published Date: Jul 15, 2013
NoSQL database technology is increasingly chosen as a viable alternative to relational databases, particularly for interactive web applications. Developers accustomed to the RDBMS structure and its data models need to change their approach when transitioning to NoSQL. Download this white paper to learn about the main challenges that motivate the need for NoSQL, the differences between relational databases and distributed document-oriented databases, the key steps in document modeling for NoSQL databases, and how to handle concurrency, scaling, and multiple-place updates in a non-relational database.
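Here is a minimal sketch of the central modeling choice, embedding versus referencing, using plain Python dicts to stand in for JSON documents (the IDs and fields are invented for illustration; the Couchbase SDK itself is not shown):

    # Embedding: related data lives inside the parent document, read in one fetch
    order_embedded = {
        "_id": "order::1001",
        "customer": {"name": "Alice", "email": "alice@example.com"},
        "items": [{"sku": "A-7", "qty": 2}],
    }

    # Referencing: the customer is its own document, linked by key
    customer_doc = {"_id": "customer::42", "name": "Alice", "email": "alice@example.com"}
    order_referenced = {
        "_id": "order::1002",
        "customer_id": "customer::42",
        "items": [{"sku": "B-3", "qty": 1}],
    }

    db = {customer_doc["_id"]: customer_doc}      # stand-in for a key-value bucket
    print(order_embedded["customer"]["name"])              # embedded: one read
    print(db[order_referenced["customer_id"]]["name"])     # referenced: one extra get

A common rule of thumb: embed data that is read together and owned by the parent; reference data that is shared, large, or updated independently.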
Tags : 
white paper, database, nosql, couchbase
    
Couchbase
Published By: Couchbase     Published Date: Dec 04, 2014
Interactive applications have changed dramatically over the last 15 years. In the late ‘90s, large web companies emerged with dramatic increases in scale on many dimensions:
· The number of concurrent users skyrocketed as applications increasingly became accessible via the web (and later on mobile devices).
· The amount of data collected and processed soared as it became easier and increasingly valuable to capture all kinds of data.
· The amount of unstructured or semi-structured data exploded, and its use became integral to the value and richness of applications.
Dealing with these issues was more and more difficult using relational database technology. The key reason is that relational databases are essentially architected to run on a single machine and use a rigid, schema-based approach to modeling data. Google, Amazon, Facebook, and LinkedIn were among the first companies to discover the serious limitations of relational database technology for supporting these new application requirements. Commercial alternatives didn’t exist, so they invented new data management approaches themselves. Their pioneering work generated tremendous interest because a growing number of companies faced similar problems. Open source NoSQL database projects formed to leverage the work of the pioneers, and commercial companies associated with these projects soon followed. Today, the use of NoSQL technology is rising rapidly among Internet companies and the enterprise. It is increasingly considered a viable alternative to relational databases, especially as more organizations recognize that operating at scale is more effectively achieved running on clusters of standard, commodity servers, and that a schema-less data model is often a better approach for handling the variety and type of data most often captured and processed today.
Tags : 
database, nosql, data, data management, white paper, why nosql, couchbase
    
Couchbase
Published By: Adaptive     Published Date: May 10, 2017
Enterprise metadata management and data quality management are two important pillars of successful enterprise data management for any organization. A well-implemented enterprise metadata management platform can enable successful data quality management at the enterprise level. This paper describes in detail an approach to integrating data quality and metadata management by leveraging the Adaptive Metadata Manager platform. It explains the various levels of integration and the benefits associated with each.
Tags : 
    
Adaptive
Published By: Embarcadero     Published Date: Oct 21, 2014
Metadata defines the structure of data in files and databases, providing detailed information about entities and objects. In this white paper, Dr. Robin Bloor and Rebecca Jozwiak of The Bloor Group discuss the value of metadata and the importance of organizing it well, which enables you to:
- Collaborate on metadata across your organization
- Manage disparate data sources and definitions
- Establish an enterprise glossary of business definitions and data elements
- Improve communication between teams
Tags : 
data, data management, enterprise data management, enterprise information management, metadata, robin bloor, rebecca jozwiak, embarcadero
    
Embarcadero
Published By: Embarcadero     Published Date: Jul 23, 2015
Whether you’re working with relational data, schema-less (NoSQL) data, or model metadata, you need a data architecture that can actively leverage information assets for business value. The most valuable data has high quality, business context, and visibility across the organization. Check out this must-read eBook for essential insights on important data architecture topics.
Tags : 
    
Embarcadero
Published By: MarkLogic     Published Date: Aug 04, 2014
The Age of Information and the associated growth of the World Wide Web have brought with them a new problem: how to actually make sense of all the information available. The overarching goal of the Semantic Web is to change that. Semantic Web technologies accomplish this goal by providing a universal framework to describe and link data so that it can be better understood and searched holistically, allowing both people and computers to see and discover relationships in the data. Today, organizations are leveraging the power of the Semantic Web to aggregate and link disparate data, improve search navigation, provide holistic search and discovery, dynamically publish content, and complete ETL processes faster. Read this white paper to gain insight into why Semantics is important, understand how Semantics works, and see examples of Semantics in practice.
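To show what describing and linking data as triples looks like in practice, here is a minimal sketch using the open-source rdflib Python library (chosen purely for illustration; MarkLogic's own semantics APIs differ, and the example.org data is invented):

    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/")
    g = Graph()
    # Each fact is a (subject, predicate, object) triple linking resources
    g.add((EX.alice, EX.worksFor, EX.acme))
    g.add((EX.acme,  EX.locatedIn, EX.boston))
    g.add((EX.alice, EX.name, Literal("Alice")))

    # SPARQL discovers relationships by traversing the linked data
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?city WHERE { ?person ex:worksFor ?org . ?org ex:locatedIn ?city . }
    """)
    for row in results:
        print(row.city)   # http://example.org/boston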
Tags : 
data, data management, whitepaper, marklogic, semantic, semantic technology, nosql, database, semantic web, big data
    
MarkLogic
Published By: MarkLogic     Published Date: Jun 17, 2015
Modern enterprises face increasing pressure to deliver business value through technological innovation that leverages all available data. At the same time, those enterprises need to reduce expenses to stay competitive, deliver results faster to respond to market demands, use real-time analytics so users can make informed decisions, and develop new applications with enhanced developer productivity. All of these factors put big data at the top of the agenda. Unfortunately, the promise of big data has often failed to deliver. With the growing volumes of unstructured and multi-structured data flooding into our data centers, the relational databases that enterprises have relied on for the last 40 years are now too limiting and inflexible. New-generation NoSQL (“Not Only SQL”) databases have gained popularity because they are ideally suited to deal with the volume, velocity, and variety of data that businesses and governments handle today.
Tags : 
data, data management, database, marklogic, column store, wide column store, nosql
    
MarkLogic
Published By: TopQuadrant     Published Date: Mar 21, 2015
Data management is becoming more and more central to the business model of enterprises. The time when data was looked at as little more than the byproduct of automation is long gone, and today we see enterprises vigorously engaged in trying to unlock maximum value from their data, even to the extent of directly monetizing it. Yet many of these efforts are hampered by immature data governance and management practices stemming from a legacy that did not pay much attention to data. Part of this problem is a failure to understand that there are different types of data, and each type of data has its own special characteristics, challenges, and concerns.

Reference data is a special type of data. It is essentially codes whose basic job is to turn other data into meaningful business information and to provide an informational context for the wider world in which the enterprise functions. This paper discusses the challenges associated with implementing a reference data management solution and the essential components of any vision for the governance and management of reference data. It covers the following topics in some detail:
· What is reference data?
· Why is reference data management important?
· What are the challenges of reference data management?
· What are some best practices for the governance and management of reference data?
· What capabilities should you look for in a reference data solution?
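A tiny illustration of the point, using invented code tables: reference data is the small sets of codes that turn transactional records into meaningful business information.

    # Reference data: small, shared code tables
    COUNTRY = {"US": "United States", "DE": "Germany", "JP": "Japan"}
    ORDER_STATUS = {"N": "New", "S": "Shipped", "X": "Cancelled"}

    # Transactional data: the codes mean nothing on their own
    order = {"id": 1001, "country_code": "DE", "status_code": "S"}

    # The reference tables supply the business meaning
    print(COUNTRY[order["country_code"]], ORDER_STATUS[order["status_code"]])
    # -> Germany Shipped

Governance matters precisely because every system that shares these codes must agree on their values and meanings.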
Tags : 
data management, data, reference data, reference data management, top quadrant, malcolm chisholm
    
TopQuadrant
Published By: TopQuadrant     Published Date: Jun 01, 2017
This paper presents a practitioner-informed roadmap intended to assist enterprises in maturing their Enterprise Information Management (EIM) practices, with a specific focus on improving Reference Data Management (RDM). Reference data is found in every application an enterprise uses, including back-end systems, front-end commerce applications, data exchange formats, and outsourced, hosted systems, big data platforms, and data warehouses. It can easily account for 20–50% of the tables in a data store, and its values are used throughout the transactional and mastered data sets to make the system internally consistent.
Tags : 
    
TopQuadrant