
Results 1 - 25 of 2532
Published By: DATAVERSITY     Published Date: Jul 24, 2014
Will the “programmable era” of computers be replaced by Cognitive Computing systems which can learn from interactions and reason through dynamic experience just like humans? With rapidly increasing volumes of Big Data, there is a compelling need for smarter machines to organize data faster, make better sense of it, discover insights, then learn, adapt, and improve over time without direct programming. This paper is sponsored by: Cognitive Scale.
Tags : 
data, data management, cognitive computing, machine learning, artificial intelligence, research paper
    
DATAVERSITY
Published By: DATAVERSITY     Published Date: Jul 06, 2015
The growth of NoSQL data storage solutions has revolutionized the way enterprises deal with their data. The older, relational platforms are still being utilized by most organizations, while the implementation of varying NoSQL platforms including Key-Value, Wide Column, Document, Graph, and Hybrid data stores is increasing at faster rates than ever seen before. Such implementations are causing enterprises to revise their Data Management procedures across the board, from governance to analytics, metadata management to software development, data modeling to regulation and compliance. The time-honored techniques for data modeling are being rewritten, reworked, and modified in a multitude of different ways, often wholly dependent on the NoSQL platform under development. This research report analyzes a 2015 DATAVERSITY® survey titled “Modeling NoSQL.” The survey examined a number of crucial issues within the NoSQL world today, with a focus on data modeling in particular.
Tags : 
    
DATAVERSITY
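As a companion to the NoSQL modeling abstract above, here is a minimal Python sketch (not taken from the report) contrasting a normalized, relational-style layout with an embedded document-store layout for the same invented customer and order data. It only illustrates the general modeling trade-off the survey is concerned with.

```python
# Hypothetical example: the same "customer places orders" facts modeled two ways.

# Relational-style modeling: normalized rows linked by foreign keys.
customers = [{"customer_id": 1, "name": "Acme Corp"}]
orders = [
    {"order_id": 100, "customer_id": 1, "total": 250.0},
    {"order_id": 101, "customer_id": 1, "total": 75.5},
]

# Document-store modeling: orders are embedded inside the customer document,
# trading normalization for read locality and schema flexibility.
customer_doc = {
    "_id": 1,
    "name": "Acme Corp",
    "orders": [
        {"order_id": 100, "total": 250.0},
        {"order_id": 101, "total": 75.5},
    ],
}

# A relational read needs a join-like lookup across both collections ...
acme_orders = [o for o in orders if o["customer_id"] == 1]

# ... while the document model answers the same question with one fetch.
assert len(customer_doc["orders"]) == len(acme_orders)
```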
Published By: DATAVERSITY     Published Date: Oct 11, 2018
The foundation of this report is a survey conducted by DATAVERSITY® that included a range of different question types and topics on the current state of Data Governance and Data Stewardship. The report evaluates the topic through a discussion and analysis of each presented survey question, as well as a deeper examination of the present and future trends.
Tags : 
    
DATAVERSITY
Published By: Ted Hills     Published Date: Mar 08, 2017
NoSQL database management systems give us the opportunity to store our data according to more than one data storage model, but our entity-relationship data modeling notations are stuck in SQL land. Is there any need to model schema-less databases, and is it even possible? In this short white paper, Ted Hills examines these questions in light of a recent paper from MarkLogic on the hybrid data model.
Tags : 
    
Ted Hills
Published By: Ted Hills     Published Date: Mar 08, 2017
Ever since Codd introduced so-called “null values” to the relational model, there have been debates about exactly what they mean and their proper handling in relational databases. In this paper I examine the meaning of tuples and relations containing “null values”. For the type of “null value” representing unknown data, I propose an interpretation and a solution that is more rigorously defined than the SQL NULL or other similar solutions, and which can be implemented in a systematic and application-independent manner in database management systems.
Tags : 
    
Ted Hills
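To make the "unknown value" problem above concrete, the following Python sketch implements three-valued (Kleene) logic, the behavior SQL approximates with NULL. It illustrates the general issue the paper addresses, not the author's proposed solution.

```python
# Three-valued logic: True, False, and UNKNOWN (represented here by None).
# This mirrors how SQL treats comparisons involving NULL.

UNKNOWN = None

def tv_and(a, b):
    """Kleene AND: False dominates, otherwise UNKNOWN propagates."""
    if a is False or b is False:
        return False
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return True

def tv_eq(a, b):
    """A comparison with an unknown operand yields UNKNOWN, not False."""
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return a == b

# An unknown salary is neither equal nor unequal to 50000: the comparison
# itself is unknown, which is why rows with NULLs drop out of WHERE clauses.
salary = UNKNOWN
print(tv_eq(salary, 50000))                 # None (UNKNOWN)
print(tv_and(tv_eq(salary, 50000), True))   # None (UNKNOWN)
```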
Published By: Ted Hills     Published Date: Mar 08, 2017
Ever since Codd introduced so-called “null values” to the relational model, there have been debates about exactly what they mean and their proper handling in relational databases. In this paper I examine the meaning of tuples and relations containing “null values”. For the type of “null value” representing that data are not applicable, I propose an interpretation and a solution that is more rigorously defined than the SQL NULL or other similar solutions, and which can be implemented in a systematic and application-independent manner in database management systems.
Tags : 
    
Ted Hills
Published By: DATAVERSITY     Published Date: May 25, 2014
Deconstructing NoSQL: Analysis of a 2013 Survey on the Use, Production, and Assessment of NoSQL Technologies in the Enterprise. This report examines the non-relational database environment from the viewpoints of those within the industry – whether current or future adopters, consultants, developers, business analysts, vendors, or others. This paper is sponsored by: MarkLogic, Cloudant and Neo4j.
Tags : 
research paper, analysis, nosql, database, nosql database, white paper, nosql white paper
    
DATAVERSITY
Published By: DATAVERSITY     Published Date: Oct 12, 2017
The foundation of this report is a survey conducted by DATAVERSITY® that included a range of different question types and topics on the current state of Data Architecture. The report evaluates the topic through a discussion and analysis of each presented survey question, as well as a deeper examination of the present and future trends.
Tags : 
data architecture, data, data management
    
DATAVERSITY
Published By: MarkLogic     Published Date: Aug 04, 2014
The Age of Information and the associated growth of the World Wide Web have brought with them a new problem: how to actually make sense of all the information available. The overarching goal of the Semantic Web is to change that. Semantic Web technologies accomplish this goal by providing a universal framework to describe and link data so that it can be better understood and searched holistically, allowing both people and computers to see and discover relationships in the data. Today, organizations are leveraging the power of the Semantic Web to aggregate and link disparate data, improve search navigation, provide holistic search and discovery, dynamically publish content, and complete ETL processes faster. Read this white paper to gain insight into why Semantics is important, understand how Semantics works, and see examples of Semantics in practice.
Tags : 
data, data management, whitepaper, marklogic, semantic, semantic technology, nosql, database, semantic web, big data
    
MarkLogic
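As a rough illustration of how Semantic Web technologies describe and link data, the sketch below builds a tiny triple store in Python using the open-source rdflib library and queries it with SPARQL. rdflib, the namespace, and the resources are assumptions of this example, not part of the MarkLogic paper.

```python
# A minimal RDF example: describe and link data as subject-predicate-object
# triples, then query the graph. Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")   # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Facts about two linked resources.
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.name, Literal("Alice")))
g.add((EX.alice, EX.worksFor, EX.acme))
g.add((EX.acme, RDF.type, EX.Organization))
g.add((EX.acme, EX.name, Literal("Acme Corp")))

# SPARQL lets us discover relationships across the linked data.
query = """
    SELECT ?person ?org WHERE {
        ?person ex:worksFor ?org .
        ?org a ex:Organization .
    }
"""
for person, org in g.query(query, initNs={"ex": EX}):
    print(person, "works for", org)
```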
Published By: MapR Technologies     Published Date: Aug 01, 2018
How do you get a machine learning system to deliver value from big data? Turns out that 90% of the effort required for success in machine learning is not the algorithm or the model or the learning - it's the logistics. Ted Dunning and Ellen Friedman identify what matters in machine learning logistics, what challenges arise, especially in a production setting, and they introduce an innovative solution: the rendezvous architecture. This new design for model management is based on a streaming approach in a microservices style. Rendezvous addresses the need to preserve and share raw data, to do effective model-to-model comparisons and to have new models on standby, ready for a hot hand-off when a production model needs to be replaced.
Tags : 
    
MapR Technologies
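The rendezvous idea of serving a primary model while logging challenger results for model-to-model comparison can be sketched very roughly in plain Python, as below. This is a toy illustration of the concept only, not MapR's streaming, microservices-based implementation; the model functions are hypothetical.

```python
# Toy sketch of a rendezvous-style evaluation: send each request to every
# model, serve the primary's answer, and keep the challengers' answers
# around for model-to-model comparison.
from typing import Callable, Dict

def primary_model(features: dict) -> float:        # hypothetical models
    return 0.8 * features["x"]

def challenger_model(features: dict) -> float:
    return 0.75 * features["x"] + 0.1

MODELS: Dict[str, Callable[[dict], float]] = {
    "primary": primary_model,
    "challenger": challenger_model,
}

comparison_log = []   # stands in for the shared raw-data / results stream

def rendezvous_score(request: dict) -> float:
    results = {name: model(request) for name, model in MODELS.items()}
    comparison_log.append({"request": request, "results": results})
    return results["primary"]           # only the primary answer is served

print(rendezvous_score({"x": 2.0}))     # 1.6; the challenger result is logged
```

Because every model already sees live traffic, a challenger that proves itself can be promoted to primary without a cold start, which is the "hot hand-off" the abstract refers to.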
Published By: CapTech     Published Date: May 26, 2015
Big Data is the future of business. According to CloudTweaks.com, as much as 2.5 quintillion bytes of data are produced each day, with most of this data being captured by Big Data systems. With its ability to bring all data sources together into one centralized place, Big Data provides opportunities for clearer insight into customer conversations and transactions. However, with the dazzling promise of Big Data comes a potentially huge letdown: if this vast pool of information resources is not accessible or usable, it becomes useless. This paper examines strategies for building the most value into your Big Data system by enabling process controls to effectively mine, access and secure Big Data.
Tags : 
big data, captech, data, data management, nosql
    
CapTech
Published By: VoltDB     Published Date: Jul 09, 2015
What is fast data? It's data in motion, and it creates Big Data. But handling it requires a radically different approach. Download the Fast Data Stack white paper from VoltDB. Learn how to build fast data applications with an in-memory solution that’s powerful enough for real-time stateful operations.
Tags : 
data, data management, data stack, big data, voltdb, database, nosql
    
VoltDB
Published By: VoltDB     Published Date: Feb 12, 2016
The need for fast data applications is growing rapidly, driven by the IoT, the surge in machine-to-machine (M2M) data, global mobile device proliferation, and the monetization of SaaS platforms. So how do you combine real-time, streaming analytics with real-time decisions in an architecture that’s reliable, scalable, and simple? In this report, Ryan Betts and John Hugg from VoltDB examine ways to develop apps for fast data, using pre-defined patterns. These patterns are general enough to suit both the do-it-yourself, hybrid batch/streaming approach, as well as the simpler, proven in-memory approach available with certain fast database offerings.
Tags : 
    
VoltDB
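One of the simplest fast-data patterns described in reports like the one above, keeping state in memory and making a decision on every event as it arrives, can be outlined in a few lines of Python. This is an illustrative sketch of the pattern only, not VoltDB code; the event fields and threshold are invented.

```python
# Sketch of a streaming "analyze, decide, act" pattern: keep per-key state
# in memory and make a real-time decision on every event as it arrives.
from collections import defaultdict

event_counts = defaultdict(int)          # in-memory state, keyed by device
ALERT_THRESHOLD = 3                      # hypothetical business rule

def handle_event(event: dict) -> str:
    """Process one event and return the decision taken for it."""
    key = event["device_id"]
    event_counts[key] += 1
    if event_counts[key] >= ALERT_THRESHOLD:
        return f"ALERT: device {key} exceeded {ALERT_THRESHOLD} events"
    return "ok"

stream = [{"device_id": "d1"}, {"device_id": "d1"},
          {"device_id": "d2"}, {"device_id": "d1"}]

for evt in stream:
    print(handle_event(evt))             # a decision is made per event
```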
Published By: Ontotext     Published Date: Dec 21, 2015
Learn how semantic technologies make any content intelligent and turn it into revenue for your publishing business. There is a smarter, cost-effective way for publishers to create, maintain and reuse content assets with higher accuracy. It is called dynamic semantic publishing. An efficient blend of semantic technologies, dynamic semantic publishing puts them to work for the publishing industry and enables powerful experiences when it comes to publishers’ main stock in trade: processing and representing information.
Tags : 
    
Ontotext
Published By: Silwood Technology     Published Date: Nov 28, 2016
Business functions in large organizations are usually handled by software application packages. Some of the most well-known of these are from SAP, Oracle, Salesforce and Microsoft. These packages all store their data in a database. Often, however, it is necessary to use that data with other IT projects. In this instance, being able to understand the metadata that defines these databases is critical. The challenge is that their metadata is complex, opaque and difficult to access. This paper describes how the top application packages store and use their own metadata. It explores the importance of understanding that metadata and examines the obstacles in getting at that metadata in a timely and effective manner.
Tags : 
    
Silwood Technology
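To make the idea of reading a package's metadata from its own database concrete, here is a small sketch that uses Python's built-in sqlite3 module as a stand-in for an application package's catalog. The packages named in the paper use far larger and more opaque dictionaries; the toy table below merely mimics an SAP-style name and is invented for the example.

```python
# Sketch: discover tables and columns by querying the database's own
# metadata, the same idea the paper applies to large application packages.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kna1 (kunnr TEXT PRIMARY KEY, name1 TEXT)")  # toy table

# List the tables the catalog knows about ...
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# ... and, for each one, the columns and types that define it.
for table in tables:
    print(table)
    for cid, name, col_type, notnull, default, pk in conn.execute(
            f"PRAGMA table_info({table})"):
        print(f"  {name}: {col_type} (pk={bool(pk)})")

conn.close()
```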
Published By: Silwood Technology     Published Date: Mar 21, 2017
Business functions in large organizations are usually handled by software application packages. Some of the most well-known of these are from SAP, Oracle, Salesforce and Microsoft. These packages all store their data in a database. Often, however, it is necessary to use that data with other IT projects. In this instance, being able to understand the metadata that defines these databases is critical. The challenge is that their metadata is complex, opaque and difficult to access. This paper describes how the top application packages store and use their own metadata. It explores the importance of understanding that metadata and examines the obstacles in getting at that metadata in a timely and effective manner.
Tags : 
    
Silwood Technology
Published By: Data Ninja     Published Date: Apr 16, 2017
Adding structure to free text with text analytics and graph databases turns text into valuable business data. This paper examines a real-life use case in risk analysis. Text is part of all communication channels, from social media and documents to logs and databases. In order to use the information from text, you need to extract the data in a way that provides useful information on entities, locations, organizations, and their properties. Graph databases are very powerful in showing the text relationships, including nearest neighbors, clusters, and shortest paths. The combination of text analytics and graph databases can be used to solve business problems.
Tags : 
    
Data Ninja
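To illustrate the combination the abstract above describes, the following Python sketch loads entity relationships already extracted from text into a graph and asks for nearest neighbours and a shortest path. The open-source networkx library stands in for a graph database here, and the entities are fabricated for the example.

```python
# Sketch: entities extracted from text become nodes, their relationships
# become edges, and graph queries surface connections for risk analysis.
# Requires: pip install networkx
import networkx as nx

# Pretend these (entity, relation, entity) triples came from text analytics.
extracted = [
    ("Acme Corp", "supplier_of", "Globex"),
    ("Globex", "subsidiary_of", "Initech"),
    ("Initech", "located_in", "Ruritania"),
    ("Acme Corp", "located_in", "Freedonia"),
]

g = nx.Graph()
for source, relation, target in extracted:
    g.add_edge(source, target, relation=relation)

# Nearest neighbours of an entity of interest ...
print(sorted(g.neighbors("Acme Corp")))

# ... and the shortest path linking a company to a high-risk location.
print(nx.shortest_path(g, "Acme Corp", "Ruritania"))
```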
Published By: AtomRain     Published Date: Nov 07, 2017
The world is more connected than ever before, and data relationships only continue to multiply. Yet enterprises still operate largely with an incomplete perspective caused by segmented, non-contextual and disconnected data silos. Connected data is the key to surviving, growing and thriving. However, a transformation across the entire enterprise won’t happen overnight, and each step must be measurable from both a business and technical perspective. Organizations need expert guidance to move more swiftly and avoid costly technical pitfalls in the new paradigm. This paper examines the journey to what we call, “The Connected Enterprise”.
Tags : 
    
AtomRain
Published By: Octopai     Published Date: Sep 01, 2018
For many BI professionals, every task can feel like MISSION IMPOSSIBLE. All the manual mapping required to sort out inconsistencies in data, and the lack of tools to simplify and shorten the process of finding and understanding data, leave BI groups frustrated and slow down the business. This whitepaper examines the revolutionary impact of automation on the cumbersome manual processes that have been dragging BI down for so long:
• Data correction vs. process correction
• Root-cause analysis with data lineage: reverse-tracing the data flow
• Data quality rules and data controls
• Automated data lineage mapping
Tags : 
    
Octopai
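Root-cause analysis with data lineage, reverse-tracing a report back to its sources, can be illustrated with a tiny Python sketch like the one below. The lineage map and asset names are hypothetical, and the example is conceptual rather than a description of Octopai's product.

```python
# Sketch of reverse-tracing data lineage: given "what feeds this asset?",
# walk upstream until the original sources are reached.

# Hypothetical lineage map: each asset -> the assets it is built from.
upstream = {
    "sales_report": ["sales_mart"],
    "sales_mart": ["staging_orders", "staging_customers"],
    "staging_orders": ["crm_orders_table"],
    "staging_customers": ["crm_customers_table"],
}

def trace_upstream(asset: str, seen=None) -> set:
    """Return every upstream asset that contributes to `asset`."""
    seen = set() if seen is None else seen
    for parent in upstream.get(asset, []):
        if parent not in seen:
            seen.add(parent)
            trace_upstream(parent, seen)
    return seen

# A bad number in the report can now be traced back to candidate sources.
print(sorted(trace_upstream("sales_report")))
```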
Published By: Erwin     Published Date: Jun 12, 2018
A strong data governance foundation underpins data security and privacy. Find out how to connect the dots across the data trinity – governance, security and privacy – and to act accordingly.
Tags : 
    
Erwin
Published By: TD Bank Group     Published Date: Aug 10, 2018
This paper examines whether blockchain distributed ledger technology could improve the management of trusted information, specifically considering data quality. Improvement was determined by considering the impact of a distributed ledger as an authoritative source in TD Bank Group's Enterprise Data Quality Management Process versus the use of standard authoritative sources such as databases and files. Distributed ledger technology is not expected, or proven, to result in a change in the Data Quality Management process. Our analysis focused on execution advantages possible due to distributed ledger properties that make it an attractive resource for data quality management (DQM).
Tags : 
    
TD Bank Group
Published By: Attunity     Published Date: Sep 21, 2018
Apache NiFi is an easy-to-use, powerful, and reliable system to process and distribute data. It provides an end-to-end platform that can collect, curate, analyze, and act on data in real time, on-premises or in the cloud, with a drag-and-drop visual interface. This book offers you an overview of NiFi along with common use cases to help you get started, debug, and manage your own dataflows.
Tags : 
    
Attunity
Published By: Attunity     Published Date: Oct 19, 2018
Change data capture (CDC) technology can modernize your data and analytics environment with scalable, efficient and real-time data replication that does not impact production systems. To realize these benefits, enterprises need to understand how this critical technology works, why it’s needed, and what their Fortune 500 peers have learned from their CDC implementations. This book serves as a practical guide for enterprise architects, data managers and CIOs as they enable modern data lake, streaming and cloud architectures with CDC. Read this book to understand:
• The rise of data lake, streaming and cloud platforms
• How CDC works and enables these architectures
• Case studies of leading-edge enterprises
• Planning and implementation approaches
Tags : 
    
Attunity
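As a rough picture of what a CDC stream delivers downstream, the Python sketch below applies a sequence of change events (insert, update, delete) to an in-memory replica. It is a conceptual outline only, not Attunity's implementation; the event format is invented.

```python
# Sketch: applying a CDC stream to a replica. Each change event carries the
# operation, the primary key, and (for inserts/updates) the new row image.
replica = {}   # target "table", keyed by primary key

def apply_change(event: dict) -> None:
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["row"]          # upsert the new row image
    elif op == "delete":
        replica.pop(key, None)               # remove the row if present

# A captured change stream, e.g. read from a source database's log.
changes = [
    {"op": "insert", "key": 1, "row": {"name": "Alice", "balance": 100}},
    {"op": "update", "key": 1, "row": {"name": "Alice", "balance": 80}},
    {"op": "insert", "key": 2, "row": {"name": "Bob", "balance": 50}},
    {"op": "delete", "key": 2},
]

for change in changes:
    apply_change(change)

print(replica)   # {1: {'name': 'Alice', 'balance': 80}}
```

Because only the changes move, not full table reloads, the source system keeps running undisturbed, which is the core benefit the book describes.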
Published By: TopQuadrant     Published Date: Jul 18, 2016
With information streaming in from more varied sources and at a faster pace than ever before, organizations are having an increasingly difficult time deriving accurate meaning from their data. Data governance systems that were once able to organize and process enterprise information are becoming too slow and limited. Semantic information management makes it easier to reconcile data from different sources by compiling and organizing information about that data, its metadata. By connecting all kinds of data and metadata in a more accessible way, semantic information systems empower users, data stewards and analysts to unlock and use the true meaning and value of their organization’s data. Learn more about the challenges in the evolving data landscape and how a semantic approach can help.
Tags : 
    
TopQuadrant
Published By: TopQuadrant     Published Date: Aug 01, 2016
With information streaming in from more varied sources and at a faster pace than ever before, organizations are having an increasingly difficult time deriving accurate meaning from their data. Data governance systems that were once able to organize and process enterprise information are becoming too slow and limited. Semantic information management makes it easier to reconcile data from different sources by compiling and organizing information about that data, its metadata. By connecting all kinds of data and metadata in a more accessible way, semantic information systems empower users, data stewards and analysts to unlock and use the true meaning and value of their organization’s data. Learn more about the challenges in the evolving data landscape and how a semantic approach can help.
Tags : 
    
TopQuadrant