Published By: DATAVERSITY     Published Date: Jul 24, 2014
Will the “programmable era” of computers be replaced by Cognitive Computing systems which can learn from interactions and reason through dynamic experience just like humans? With rapidly increasing volumes of Big Data, there is a compelling need for smarter machines to organize data faster, make better sense of it, discover insights, then learn, adapt, and improve over time without direct programming. This paper is sponsored by: Cognitive Scale.
Tags : 
data, data management, cognitive computing, machine learning, artificial intelligence, research paper
    
DATAVERSITY
Published By: Ted Hills     Published Date: Mar 08, 2017
NoSQL database management systems give us the opportunity to store our data according to more than one data storage model, but our entity-relationship data modeling notations are stuck in SQL land. Is there any need to model schema-less databases, and is it even possible? In this short white paper, Ted Hills examines these questions in light of a recent paper from MarkLogic on the hybrid data model.
Tags : 
    
Ted Hills
Published By: Ted Hills     Published Date: Mar 08, 2017
This paper explores the differences between three situations that appear on the surface to be very similar: a data attribute that may occur zero or one times, a data attribute that is optional, and a data attribute whose value may be unknown. It shows how each of these different situations is represented in Concept and Object Modeling Notation (COMN, pronounced “common”). The theory behind the analysis is explained in greater detail by three papers: Three-Valued Logic, A Systematic Solution to Handling Unknown Data in Databases, and An Approach to Representing Non-Applicable Data in Relational Databases.
Tags : 
    
Ted Hills
Published By: Ted Hills     Published Date: Mar 08, 2017
Much has been written and debated about the use of SQL NULLs to represent unknown values, and the possible use of three-valued logic. However, there has never been a systematic application of any three-valued logic for use in the logical expressions of computer programs. This paper lays the foundation for a systematic application of three-valued logic to one of the two problems inadequately addressed by SQL NULLs.
Tags : 
    
Ted Hills
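The three-valued logic discussed in the papers above can be illustrated with a small sketch. The truth tables below follow standard Kleene semantics (the same logic underlying SQL's unknown), not code taken from the papers themselves:

```python
# A minimal sketch of Kleene's strong three-valued logic.
# Truth values are True, False, and None (standing in for "unknown").

U = None  # "unknown"

def and3(a, b):
    # False dominates; otherwise unknown propagates
    if a is False or b is False:
        return False
    if a is U or b is U:
        return U
    return True

def or3(a, b):
    # True dominates; otherwise unknown propagates
    if a is True or b is True:
        return True
    if a is U or b is U:
        return U
    return False

def not3(a):
    # Negating "unknown" stays unknown
    return U if a is U else (not a)

# SQL-like behavior: unknown AND False is False, unknown OR True is True
print(and3(U, False))  # False
print(or3(U, True))    # True
print(not3(U))         # None (still unknown)
```

Note how a definite result is possible even with an unknown operand, which is exactly why expressions like `x OR TRUE` need not depend on whether `x` is known.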
Published By: Ted Hills     Published Date: Mar 08, 2017
Ever since Codd introduced so-called “null values” to the relational model, there have been debates about exactly what they mean and their proper handling in relational databases. In this paper I examine the meaning of tuples and relations containing “null values”. For the type of “null value” representing unknown data, I propose an interpretation and a solution that is more rigorously defined than the SQL NULL or other similar solutions, and which can be implemented in a systematic and application-independent manner in database management systems.
Tags : 
    
Ted Hills
Published By: Ted Hills     Published Date: Mar 08, 2017
Ever since Codd introduced so-called “null values” to the relational model, there have been debates about exactly what they mean and their proper handling in relational databases. In this paper I examine the meaning of tuples and relations containing “null values”. For the type of “null value” representing that data are not applicable, I propose an interpretation and a solution that is more rigorously defined than the SQL NULL or other similar solutions, and which can be implemented in a systematic and application-independent manner in database management systems.
Tags : 
    
Ted Hills
Published By: Wave Computing     Published Date: Jul 06, 2018
This paper makes the case for using coarse-grained reconfigurable array (CGRA) architectures for efficient acceleration of the data flow computations used in deep neural network training and inferencing. The paper discusses the problems with other parallel acceleration systems, such as massively parallel processor arrays (MPPAs) and heterogeneous systems based on CUDA and OpenCL, and proposes that CGRAs with autonomous computing features deliver improved performance and computational efficiency. The machine learning compute appliance that Wave Computing is developing executes data flow graphs using multiple clockless, CGRA-based Systems on Chips (SoCs), each containing 16,000 processing elements (PEs). This paper describes the tools needed for efficient compilation of data flow graphs to the CGRA architecture, and outlines Wave Computing’s WaveFlow software (SW) framework for the online mapping of models from popular frameworks like TensorFlow, MXNet and Caffe.
Tags : 
    
Wave Computing
Published By: Attunity     Published Date: Sep 21, 2018
Apache NiFi is an easy-to-use, powerful, and reliable system for processing and distributing data. It provides an end-to-end platform that can collect, curate, analyze, and act on data in real time, on-premises or in the cloud, with a drag-and-drop visual interface. This book offers an overview of NiFi along with common use cases to help you get started, debug, and manage your own dataflows.
Tags : 
    
Attunity
Published By: Attunity     Published Date: Oct 19, 2018
Change data capture (CDC) technology can modernize your data and analytics environment with scalable, efficient and real-time data replication that does not impact production systems. To realize these benefits, enterprises need to understand how this critical technology works, why it’s needed, and what their Fortune 500 peers have learned from their CDC implementations. This book serves as a practical guide for enterprise architects, data managers and CIOs as they enable modern data lake, streaming and cloud architectures with CDC. Read this book to understand:
- The rise of data lake, streaming and cloud platforms
- How CDC works and enables these architectures
- Case studies of leading-edge enterprises
- Planning and implementation approaches
Tags : 
    
Attunity
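Log-based CDC, as described above, reads change events from a source system's transaction log and replays them against a downstream target. A toy sketch of the replay step, using a hypothetical event format rather than Attunity's actual API:

```python
# Toy illustration of applying change-data-capture events to a replica.
# Real CDC tools read the source database's transaction log; here the
# "log" is just a Python list of hypothetical events.

replica = {}  # primary key -> row

def apply_event(event):
    op, key, row = event["op"], event["key"], event.get("row")
    if op in ("insert", "update"):
        replica[key] = row          # upsert the new row image
    elif op == "delete":
        replica.pop(key, None)      # remove the row if present

log = [
    {"op": "insert", "key": 1, "row": {"name": "Ada"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "insert", "key": 2, "row": {"name": "Grace"}},
    {"op": "delete", "key": 2},
]

for event in log:
    apply_event(event)

print(replica)  # {1: {'name': 'Ada L.'}}
```

Because only the changes flow downstream, the production source is never queried wholesale, which is the property that makes CDC attractive for feeding data lakes and streaming platforms.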
Published By: graphgrid     Published Date: Oct 02, 2018
Whether it’s for a specific application, optimizing your existing operations, or innovating new customer services, graph databases are a powerful technology that turns accessing and analyzing your data into a competitive advantage. Graph databases resolve Big Data limitations and free up data architects and developers to build solutions that predict behaviors, enable data-driven decisions and make insightful recommendations. Yet just as a car is not functional with only an engine, graph databases require surrounding capabilities, including ingestion of multi-source data, data models built for your unique business needs, easy data interaction and visualization, seamless coexistence with legacy systems, high-performance search, and integration of data analysis applications. Collectively, this comprehensive data platform turns graph capabilities into tangible insights that drive your business forward.
Tags : 
    
graphgrid
Published By: DATAVERSITY     Published Date: Jun 14, 2013
This report analyzes many challenges faced when beginning a new Data Governance program, and outlines crucial elements in successfully executing such a program. “Data Governance” is a term fraught with nuance, misunderstanding, myriad opinions, and fear. It is often enough to keep Data Stewards and senior executives awake late into the night. The modern enterprise needs reliable and sustainable control over its technological systems, business processes, and data assets. Such control is essential to competitive success in an ever-changing marketplace driven by the exponential growth of data, mobile computing, social networking, the need for real-time analytics and reporting mechanisms, and increasing regulatory compliance requirements. Data Governance can enhance and buttress (or resuscitate, if needed) the strategic and tactical business drivers every enterprise needs for market success. This paper is sponsored by: ASG, DGPO and DebTech International.
Tags : 
data, data management, data governance, data steward, dataversity, research paper
    
DATAVERSITY
Published By: DATAVERSITY     Published Date: Dec 27, 2013
There are actually many elements of such a vision that are working together. ACID and NoSQL are not the antagonists they were once thought to be; NoSQL works well under a BASE model, but also some of the innovative NoSQL systems fully conform to ACID requirements. Database engineers have puzzled out how to get non-relational systems to work within an environment that demands high availability, scalability, with differing levels of recovery and partition tolerance. BASE is still a leading innovation that is wedded to the NoSQL model, and the evolution of both together is harmonious. But that doesn’t mean they always have to be in partnership; there are several options. So while the opening anecdote is true in many cases, organizations that need more diverse possibilities can move into the commercial arena and get the specific option that works best for them. This paper is sponsored by: MarkLogic.
Tags : 
nosql, database, acid v base, white paper
    
DATAVERSITY
Published By: TopQuadrant     Published Date: Jun 01, 2017
This paper presents a practitioner-informed roadmap intended to assist enterprises in maturing their Enterprise Information Management (EIM) practices, with a specific focus on improving Reference Data Management (RDM). Reference data is found in every application used by an enterprise, including back-end systems, front-end commerce applications, data exchange formats, and outsourced, hosted systems, big data platforms, and data warehouses. It can easily account for 20–50% of the tables in a data store, and its values are used throughout transactional and mastered data sets to make the system internally consistent.
Tags : 
    
TopQuadrant
Published By: TopQuadrant     Published Date: Jun 11, 2018
Data governance is a lifecycle-centric asset management activity. To understand and realize the value of data assets, it is necessary to capture information about them (their metadata) in a connected way. Capturing the meaning and context of diverse enterprise data in connection to all assets in the enterprise ecosystem is foundational to effective data governance. Therefore, a data governance environment must represent assets and their role in the enterprise using an open, extensible and “smart” approach. Knowledge graphs are the most viable and powerful way to do this. This short paper outlines how knowledge graphs are flexible, evolvable, semantic and intelligent. It is these characteristics that enable them to:
• capture the description of data as an interconnected set of information that meaningfully bridges enterprise metadata silos.
• deliver integrated data governance by addressing all three aspects of data governance: Executive Governance, Representative Governance, and App
Tags : 
    
TopQuadrant
Published By: MapR Technologies     Published Date: Mar 29, 2016
Add Big Data Technologies to Get More Value from Your Stack
Taking advantage of big data starts with understanding how to optimize and augment your existing infrastructure. Relational databases have endured for a reason: they fit well with the types of data that organizations use to run their business. These types of data, in business applications such as ERP, CRM, EPM, etc., are not fundamentally changing, which suggests that relational databases will continue to play a foundational role in enterprise architectures for the foreseeable future. One area where emerging technologies can complement relational database technologies is big data. With rapidly growing volumes of data, along with many new sources of data, organizations look for ways to relieve pressure on their existing systems. That’s where Hadoop and NoSQL come in.
Tags : 
    
MapR Technologies
Published By: MapR Technologies     Published Date: Aug 01, 2018
How do you get a machine learning system to deliver value from big data? Turns out that 90% of the effort required for success in machine learning is not the algorithm or the model or the learning - it's the logistics. Ted Dunning and Ellen Friedman identify what matters in machine learning logistics, what challenges arise, especially in a production setting, and they introduce an innovative solution: the rendezvous architecture. This new design for model management is based on a streaming approach in a microservices style. Rendezvous addresses the need to preserve and share raw data, to do effective model-to-model comparisons and to have new models on standby, ready for a hot hand-off when a production model needs to be replaced.
Tags : 
    
MapR Technologies
Published By: Cambridge Semantics     Published Date: May 11, 2016
With the explosive growth of Big Data, IT professionals find their time and resources squeezed between managing increasingly large and diverse siloed data stores and meeting increased user demands for timely, accurate data. The graph-based Anzo Smart Data Manager is built to relieve these burdens by automating the process of managing, cataloging and governing data at enterprise scale and with enterprise security. Anzo Smart Data Manager allows companies to truly understand their data ecosystems and leverage the metadata within them.
Tags : 
    
Cambridge Semantics
Published By: Cloudant - an IBM Company     Published Date: Jun 01, 2015
Whether you're a DBA, data scientist or developer, you're probably considering how the cloud can help modernize your information management and analytics strategy. Cloud data warehousing can help you get more value from your data by combining the benefits of the cloud - speed, scale, and agility - with the simplicity and performance of traditional on-premises appliances. This white paper explores how a cloud data warehouse like IBM dashDB can reduce costs and deliver new business insights. Readers will learn about:
- How data warehousing-as-a-service helps you scale without incurring extra costs
- The benefits of in-database analytics in a cloud data warehouse
- How a cloud data warehouse can integrate with the larger ecosystem of business intelligence tools, both on prem and off prem
Tags : 
nosql, ibm, dashdb, database, cloud
    
Cloudant - an IBM Company
Published By: Aerospike     Published Date: Jul 17, 2014
This whitepaper provides a brief technical overview of ACID support in Aerospike. It includes a definition of ACID (Atomicity, Consistency, Isolation, Durability) and an overview of the CAP Theorem, which postulates that only two of the three properties of consistency, availability, and partition tolerance can be guaranteed in a distributed system at a specific time. Although Aerospike is an AP system with a proven track record of 100% uptime, this paper describes Aerospike's unique approach to avoiding network partitioning to also ensure high consistency. In addition, the paper describes how Aerospike will give users the option of a CP configuration with complete consistency in the presence of network partitioning, at the cost of reduced availability.
Tags : 
whitepaper, data, data management, nosql, aerospike, acid
    
Aerospike
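The consistency-versus-availability trade-off described above is often reasoned about with the textbook quorum rule: with N replicas, W write acknowledgments and R read responses, a read is guaranteed to overlap the latest write whenever W + R > N. The sketch below illustrates that rule only; it is not Aerospike's actual replication protocol:

```python
# Quorum overlap rule for N replicas: a read of R nodes must include
# at least one of the W nodes that acknowledged the latest write
# whenever W + R > N (pigeonhole argument).

def read_overlaps_write(n, w, r):
    """True if every R-node read set must intersect every W-node write set."""
    return w + r > n

# N=3 replicas: W=2, R=2 guarantees overlap (consistent reads);
# W=1, R=1 does not (a read may miss the newest write).
print(read_overlaps_write(3, 2, 2))  # True
print(read_overlaps_write(3, 1, 1))  # False
```

Raising W and R buys consistency at the cost of availability and latency, which mirrors the AP-versus-CP configuration choice the paper describes.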
Published By: Pentaho     Published Date: Nov 03, 2016
While a whole ecosystem of tools has sprung up around Hadoop to handle and analyze data, many of them are specialized to just one part of a larger process. In order to fulfill the promise of Hadoop, organizations need to step back and take an end-to-end view of their analytic data pipelines.
Tags : 
    
Pentaho
Published By: Skytree     Published Date: Nov 23, 2014
Critical business information is often in the form of unstructured and semi-structured data that can be hard or impossible to interpret with legacy systems. In this brief, discover how you can use machine learning to analyze both unstructured text data and semi- structured log data, providing you with the insights needed to achieve your business goals.
Tags : 
log data, machine learning, natural language, nlp, natural language processing, skytree, unstructured data, semi-structured data, data analysis
    
Skytree
Published By: AnalytixDS     Published Date: Feb 28, 2015
With future business intelligence solutions clearly evolving from data that comes from highly efficient, well-behaved systems to data that comes from the extended enterprise, where data is not necessarily so well structured or well behaved, organizations are forced into a more collaborative mode of operation, with their core infrastructure adapted from the consumer space and, to the extent possible, conformed to their existing repositories. This whitepaper addresses various challenges consumers face while managing enormous data sets within this complex scenario. Further, we try to answer the question: is Big Data governance really that different from traditional data governance initiatives? Finally, we show how AnalytiX™ Mapping Manager™ can help organizations accelerate the development and deployment of a successful Big Data/Business Intelligence platform and accelerate delivery of all sorts of data: structured, semi-structured, as well as unstructured.
Tags : 
big data, big data governance, data governance, analytixds
    
AnalytixDS
Published By: Expert System     Published Date: Mar 19, 2015
Establishing context and knowledge capture
In today’s knowledge-infused world, it is vitally important for organizations of any size to deploy an intuitive knowledge platform that delivers the right information at the right time, in a way that is useful and helpful. Semantic technology processes content for meaning, making it possible to understand words in context: it allows for better content processing and interpretation, enabling content organization and navigation, which in turn increases findability.
Tags : 
enterprise data management, unstructured data, semantic technology, expert system
    
Expert System
Published By: CapTech     Published Date: May 26, 2015
Big Data is the future of business. According to CloudTweaks.com, as much as 2.5 quintillion bytes of data are produced each day, and most of it is captured by Big Data systems. With its ability to bring all data sources together into one centralized place, Big Data offers opportunities for clearer vision into customer conversations and transactions. However, with the dazzling big promise of Big Data comes a potentially huge letdown: if this vast pool of information resources is not accessible or usable, it becomes useless. This paper examines strategies for building the most value into your Big Data system by enabling process controls to effectively mine, access and secure Big Data.
Tags : 
big data, captech, data, data management, nosql
    
CapTech
Published By: GBG Loqate     Published Date: Jul 09, 2015
Businesses are vulnerable when they assume that their data is accurate, because they are almost always losing money without their knowledge. When it comes to data quality, the problems that you don’t suspect are often worse and more pervasive than the ones you are aware of. Addresses are subject to their own specific set of rules. Detecting and correcting address errors is a complex problem, and one that can only be solved with specialized software.
Tags : 
data, data management, data quality, loqate
    
GBG Loqate