
Results 1 - 25 of 3356
Published By: TD Bank Group     Published Date: Aug 10, 2018
This paper examines whether blockchain distributed ledger technology could improve the management of trusted information, specifically with respect to data quality. Improvement was assessed by comparing the impact of a distributed ledger as an authoritative source in TD Bank Group's Enterprise Data Quality Management process against the use of standard authoritative sources such as databases and files. Distributed ledger technology was neither expected nor shown to change the Data Quality Management process itself; instead, our analysis focused on the execution advantages made possible by distributed ledger properties that make it an attractive resource for data quality management (DQM).
Tags : 
    
TD Bank Group
Published By: CData     Published Date: Jan 04, 2019
The growth of NoSQL continues to accelerate as the industry is increasingly forced to develop new and more specialized data structures to deal with the explosion of application and device data. At the same time, new data products for BI, Analytics, Reporting, Data Warehousing, AI, and Machine Learning continue along a similar growth trajectory. Enabling interoperability between applications and data sources, each with a unique interface and value proposition, is a tremendous challenge. This paper discusses a variety of mapping and flattening techniques, and continues with examples that highlight performance and usability differences between approaches.
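As a hedged illustration of the kind of flattening technique such papers compare (the function and field names below are invented for this sketch, not taken from CData's paper), nested document data can be collapsed into flat, relational-style columns by joining nested keys with a separator:

```python
# Sketch: flatten a nested JSON-style document into a single-level
# record whose keys read like relational column names.
def flatten(doc, parent_key="", sep="."):
    """Recursively flatten nested dicts, joining keys with `sep`."""
    items = {}
    for key, value in doc.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

record = {"id": 1, "customer": {"name": "Acme", "address": {"city": "Oslo"}}}
flat = flatten(record)
print(flat)
# {'id': 1, 'customer.name': 'Acme', 'customer.address.city': 'Oslo'}
```

Connectors often offer several such strategies (dot-notation columns, child tables for arrays, and so on), each trading usability against fidelity to the original structure, which is the performance/usability comparison the paper walks through.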
Tags : 
data architecture, data, data management, business intelligence, data warehousing
    
CData
Published By: Alation     Published Date: Jan 06, 2017
90% of the time that is spent creating new reports is recreating information that already exists. Without a way to effectively share prior work and identify verified data sources, analysts and other data consumers lack shared context on how to apply data to analytic inquiries and business decision making. Time is wasted tracking down subject matter experts and trying to unearth tribal knowledge. Leading analytic organizations in retail, healthcare, financial services and technology are using data catalogs to help their analysts find, understand and use data appropriately. What are the 5 critical capabilities of a data catalog? Learn more here:
Tags : 
    
Alation
Published By: Databricks     Published Date: Sep 13, 2018
Learn how to get started with Apache Spark™

Apache Spark™'s ability to speed up analytic applications by orders of magnitude, its versatility, and its ease of use are quickly winning the market. With Spark's appeal to developers, end users, and integrators looking to solve complex data problems at scale, it is now the most active open source project in the big data community. With rapid adoption by enterprises across a wide range of industries, Spark has been deployed at massive scale, collectively processing multiple petabytes of data on clusters of over 8,000 nodes.

If you are a developer or data scientist interested in big data, learn how Spark may be the tool for you. Databricks is happy to present this ebook as a practical introduction to Spark. Download this ebook to learn:
• Spark's basic architecture
• Why Spark is a popular choice for data analytics
• What tools and features are available
• How to get started right away through interactive sample code
Tags : 
    
Databricks
Published By: First San Francisco Partners     Published Date: Mar 13, 2015
A Data Governance Organization and its structure should be defined to align with your company’s organizational hierarchy and resources. Finding the right people to assign to data governance requires an understanding of both the functional and the political role of governance within your organization. This paper highlights some best practices in putting the right resources behind the required roles.
Tags : 
data governance, data governance resources, data governance organization
    
First San Francisco Partners
Published By: Couchbase     Published Date: Dec 04, 2014
Interactive applications have changed dramatically over the last 15 years. In the late ‘90s, large web companies emerged with dramatic increases in scale on many dimensions:

· The number of concurrent users skyrocketed as applications increasingly became accessible via the web (and later on mobile devices).
· The amount of data collected and processed soared as it became easier and increasingly valuable to capture all kinds of data.
· The amount of unstructured or semi-structured data exploded, and its use became integral to the value and richness of applications.

Dealing with these issues became more and more difficult with relational database technology. The key reason is that relational databases are essentially architected to run on a single machine and use a rigid, schema-based approach to modeling data.

Google, Amazon, Facebook, and LinkedIn were among the first companies to discover the serious limitations of relational database technology for supporting these new application requirements. Commercial alternatives didn’t exist, so they invented new data management approaches themselves. Their pioneering work generated tremendous interest because a growing number of companies faced similar problems. Open source NoSQL database projects formed to leverage the work of the pioneers, and commercial companies associated with these projects soon followed.

Today, the use of NoSQL technology is rising rapidly among Internet companies and the enterprise. It is increasingly considered a viable alternative to relational databases, especially as more organizations recognize that operating at scale is more effectively achieved on clusters of standard, commodity servers, and that a schema-less data model is often a better approach for handling the variety and types of data most often captured and processed today.
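A minimal sketch of the schema-less document model the paper contrasts with rigid relational schemas (plain Python dicts standing in for documents; this is not Couchbase's actual API): two records in the same logical collection can carry different fields without any schema migration.

```python
# Two "documents" in one logical collection; the second carries extra
# fields the first lacks, with no ALTER TABLE or migration required.
orders = [
    {"id": "o1", "user": "ana", "total": 40.0},
    {"id": "o2", "user": "bo", "total": 12.5,
     "coupon": "SPRING", "items": [{"sku": "A1", "qty": 2}]},
]

# Applications simply read whatever fields each document carries:
totals = {o["id"]: o["total"] for o in orders}
coupon_orders = [o["id"] for o in orders if "coupon" in o]
print(totals, coupon_orders)
```

In a relational design, the optional coupon and nested line items would force schema changes or extra join tables; in a document model the variety lives inside each record.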
Tags : 
database, nosql, data, data management, white paper, why nosql, couchbase
    
Couchbase
Published By: Embarcadero     Published Date: Oct 21, 2014
Metadata defines the structure of data in files and databases, providing detailed information about entities and objects. In this white paper, Dr. Robin Bloor and Rebecca Jozwiak of The Bloor Group discuss the value of metadata and the importance of organizing it well, which enables you to:
- Collaborate on metadata across your organization
- Manage disparate data sources and definitions
- Establish an enterprise glossary of business definitions and data elements
- Improve communication between teams
Tags : 
data, data management, enterprise data management, enterprise information management, metadata, robin bloor, rebecca jozwiak, embarcadero
    
Embarcadero
Published By: TopQuadrant     Published Date: Jun 01, 2017
This paper presents a practitioner-informed roadmap intended to assist enterprises in maturing their Enterprise Information Management (EIM) practices, with a specific focus on improving Reference Data Management (RDM). Reference data is found in every application an enterprise uses, including back-end systems, front-end commerce applications, data exchange formats, outsourced and hosted systems, big data platforms, and data warehouses. It can easily account for 20–50% of the tables in a data store, and its values are used throughout transactional and mastered data sets to keep systems internally consistent.
Tags : 
    
TopQuadrant
Published By: MapR Technologies     Published Date: Mar 29, 2016
Add Big Data Technologies to Get More Value from Your Stack

Taking advantage of big data starts with understanding how to optimize and augment your existing infrastructure. Relational databases have endured for a reason: they fit well with the types of data that organizations use to run their business. These types of data, found in business applications such as ERP, CRM and EPM, are not fundamentally changing, which suggests that relational databases will continue to play a foundational role in enterprise architectures for the foreseeable future.

One area where emerging technologies can complement relational database technologies is big data. With rapidly growing volumes of data and many new sources of it, organizations look for ways to relieve pressure on their existing systems. That’s where Hadoop and NoSQL come in.
Tags : 
    
MapR Technologies
Published By: Cambridge Semantics     Published Date: May 11, 2016
With the explosive growth of Big Data, IT professionals find their time and resources squeezed between managing increasingly large and diverse siloed data stores and meeting user demands for timely, accurate data. The graph-based Anzo Smart Data Manager is built to relieve these burdens by automating the process of managing, cataloging and governing data with enterprise-grade scale and security. Anzo Smart Data Manager allows companies to truly understand their data ecosystems and leverage the metadata within them.
Tags : 
    
Cambridge Semantics
Published By: MemSQL     Published Date: Jun 25, 2014
Emerging business innovations focused on realizing quick business value on new and growing data sources require “hybrid transactional and analytical processing” (HTAP), the notion of performing analysis on data directly in an operational data store. While this is not a new idea, Gartner reports that the potential for HTAP has not been fully realized due to technology limitations and inertia in IT departments. MemSQL offers a unique combination of performance, flexibility, and ease of use that allows companies to implement HTAP to power their business applications.
Tags : 
    
MemSQL
Published By: Melissa Data     Published Date: Jan 18, 2018
Maintaining high-quality data is essential for operational efficiency, meaningful analytics and good long-term customer relationships. But when dealing with multiple sources of data, data quality becomes complex, so you need to know when to build custom data quality tools rather than rely on canned solutions. To answer this question, it is important to understand the difference between rules-based data quality, where internal subject matter expertise is necessary, and active data quality, where different domain expertise and resources are required.
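A hedged sketch of what "rules-based" data quality can look like in practice, with checks encoded by internal subject matter experts; the rules, field names and thresholds here are illustrative, not Melissa Data's implementation:

```python
import re

# Each rule pairs a human-readable name with a predicate over a record.
rules = [
    ("email looks valid",
     lambda r: re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", r.get("email", "")) is not None),
    ("age in range",
     lambda r: 0 <= r.get("age", -1) <= 120),
]

def violations(record):
    """Return the names of all rules the record fails."""
    return [name for name, check in rules if not check(record)]

print(violations({"email": "x@example.com", "age": 200}))  # ['age in range']
```

Active data quality, by contrast, relies on external reference knowledge (postal files, change-of-address data, and the like) rather than in-house rules alone, which is the distinction the paper develops.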
Tags : 
    
Melissa Data
Published By: AnalytixDS     Published Date: May 09, 2016
This paper seeks to provide insight into how AnalytiX Data Services can address many of the data related issues that come with combining disparate source data as is often the case in mergers and acquisitions. It will cover some of the best features and functionality offered by AnalytiX Data Services’ flagship product AnalytiX Mapping Manager, which have specific capabilities that are of high value in this scenario.
Tags : 
    
AnalytixDS
Published By: CapTech     Published Date: May 26, 2015
Big Data is the future of business. According to CloudTweaks.com, as much as 2.5 quintillion bytes of data are produced each day, and most of it flows into Big Data systems. With its ability to bring all data sources together into one centralized place, Big Data provides opportunities for clearer visibility into customer conversations and transactions. However, with the dazzling big promise of Big Data comes a potentially huge letdown: if this vast pool of information resources is not accessible or usable, it becomes useless. This paper examines strategies for building the most value into your Big Data system by enabling process controls to effectively mine, access and secure Big Data.
Tags : 
big data, captech, data, data management, nosql
    
CapTech
Published By: Reltio     Published Date: Feb 12, 2016
Reltio delivers reliable data, relevant insights and recommended actions so companies can be right faster. Reltio Cloud combines data-driven applications with modern data management for better planning, customer engagement and risk management. IT streamlines data management for a complete view across all sources and formats at scale, while sales, marketing and compliance teams use data-driven applications to predict, collaborate and respond to opportunities in real-time. Companies of all sizes, including leading Fortune 500 companies in healthcare and life sciences, distribution and retail rely on Reltio.
Tags : 
    
Reltio
Published By: Reltio     Published Date: Jan 20, 2017
If you invested in master data management (MDM), you are part of an elite group that could afford the time, effort and resources to deploy what has traditionally been a tool and discipline reserved for only the largest enterprises. Feedback from top industry analysts and from companies that transitioned from legacy MDM to modern data management platforms led to this list of 10 warning signs you can use as a handy guide. If one or more of these signs gets your attention, it warrants a serious conversation with your current provider about these issues and how their offering compares to the modern alternatives available today.
Tags : 
    
Reltio
Published By: Looker     Published Date: Mar 15, 2016
Data centralization merges different data streams into a common source through unified variables. This process can provide context to overly-broad metrics and enable cross-platform analytics to guide better business decisions. Investments in analytics tools are now paying back a 13.01:1 return on investment (ROI), with increased returns when these tools integrate with three or more data sources. While the perks of centralization are obvious in theory, the quantity and variety of data available in today’s landscape make this difficult to achieve. This report provides a roadmap for how to connect systems, data stores, and institutions (both technological and human).

Learn:
• How data centralization enables better analytics
• How to redefine data as a vehicle for change
• How the right BI tool eliminates the data analyst bottleneck
• How to define single sources of truth for your organization
• How to build a data-driven (not just data-rich) organization
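Centralization through a "unified variable" can be sketched as follows, assuming a shared user ID links two hypothetical data streams (a CRM extract and a web analytics extract); the stream names and fields are invented for illustration:

```python
# Two data streams keyed on the same unified variable (a user ID).
crm = {"u1": {"plan": "pro"}, "u2": {"plan": "free"}}
web = {"u1": {"visits": 9}, "u3": {"visits": 2}}

# Merge them into one centralized view per user; users present in only
# one stream keep whatever fields that stream provides.
merged = {uid: {**crm.get(uid, {}), **web.get(uid, {})}
          for uid in set(crm) | set(web)}
print(merged["u1"])  # cross-platform view: {'plan': 'pro', 'visits': 9}
```

The merged view is what makes cross-platform questions (for example, visit counts by subscription plan) answerable at all; without the shared key, each metric stays siloed in its own stream.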
Tags : 
    
Looker
Published By: Innovative Systems     Published Date: Oct 26, 2017
Even after investing significant time and resources implementing a data quality solution, many enterprises find that their data does not effectively support their goals. This white paper shows how to get the most out of your data quality solution by tailoring it to support your business goals.
Tags : 
    
Innovative Systems
Published By: CloverETL     Published Date: Nov 24, 2017
The volume of data is increasing by 40% per year (Source: IDC). In addition, the structure and quality of data differs vastly with a growing number of data sources. More agile ways of working with data are required. This whitepaper discusses the vast options available for managing and storing data using data architectures, and offers use cases for each architecture. Furthermore, the whitepaper explores the benefits, drawbacks and challenges of each data architecture and commonly used practices for building these architectures.
Tags : 
    
CloverETL
Published By: Syncsort     Published Date: Jan 04, 2018
The term Big Data doesn’t seem quite “big enough” anymore to properly describe the vast over-abundance of data available to organizations today. As the volume and variety of Big Data sources continue to grow, the level of trust in that data remains troublingly low. Read on and discover how a strong focus on data quality spanning the people, processes and technology of your organization will help keep your data lake pristine.
Tags : 
    
Syncsort
Published By: Syncsort     Published Date: Jul 17, 2018
In most applications we use today, data is retrieved by the source code of the application and is then used to make decisions. The application is ultimately affected by the data, but source code determines how the application performs, how it does its work and how the data is used. Today, in a world of AI and machine learning, data has a new role – becoming essentially the source code for machine-driven insight. With AI and machine learning, the data is the core of what fuels the algorithm and drives results. Without a significant quantity of good quality data related to the problem, it’s impossible to create a useful model. Download this Whitepaper to learn why the process of identifying biases present in the data is an essential step towards debugging the data that underlies machine learning predictions and improves data quality.
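One simple first-pass check for the kind of bias the paper describes is comparing the positive-label rate across groups in the training data, so skewed groups stand out before a model is fit; this is an illustrative sketch, not Syncsort's method, and the field names are invented:

```python
from collections import Counter

def positive_rate_by_group(rows, group_key, label_key):
    """Positive-label rate per group: a crude but useful bias probe."""
    counts, positives = Counter(), Counter()
    for row in rows:
        counts[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {g: positives[g] / counts[g] for g in counts}

rows = [
    {"region": "north", "approved": 1},
    {"region": "north", "approved": 1},
    {"region": "south", "approved": 0},
    {"region": "south", "approved": 1},
]
print(positive_rate_by_group(rows, "region", "approved"))  # north 1.0 vs south 0.5
```

A large gap between groups does not prove the data is biased, but it flags exactly the kind of imbalance worth "debugging" before the data becomes the source code of a model.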
Tags : 
    
Syncsort
Published By: Zoomdata     Published Date: Apr 11, 2018
How do you imagine data? If you’re still thinking about it in terms of uniform records and databases, it’s time for an update. Back in the day, analytical engines were limited, so our perception of what could count as data was, too. Today, the big data renaissance has begun: more data now lives outside of databases than inside them, and nearly everything is data. We’ll help you discover how business intelligence and data sources have changed, and how our approach to analyzing data has changed as a result. An eBook on this very topic is waiting for you; all it takes is the click of a button.
Tags : 
    
Zoomdata