Published By: DATAVERSITY     Published Date: Nov 05, 2014
Ask any CEO whether they want to better leverage their data assets to drive growth, revenue, and productivity, and the answer will most likely be “yes, of course.” Ask many of them what that means or how they will do it, and their answers will be as disparate as most enterprises’ data strategies. To successfully control, utilize, analyze, and store the vast amounts of data flowing through organizations today, an enterprise-wide approach is necessary. The Chief Data Officer (CDO) is the newest member of the executive suite in many organizations worldwide. Their task is to develop and implement the strategies needed to harness the value of an enterprise’s data, while working alongside the CEO, CIO, CTO, and other executives. They are the vital “data” bridge between business and IT. This paper is sponsored by: Paxata and CA Technologies
Tags : 
chief data officer, cdo, data, data management, research paper, dataversity
    
DATAVERSITY
Published By: DATAVERSITY     Published Date: Jul 06, 2015
The growth of NoSQL data storage solutions has revolutionized the way enterprises deal with their data. The older, relational platforms are still used by most organizations, while implementations of various NoSQL platforms, including Key-Value, Wide Column, Document, Graph, and Hybrid data stores, are increasing at faster rates than ever before. Such implementations are causing enterprises to revise their Data Management procedures across the board, from governance to analytics, metadata management to software development, and data modeling to regulation and compliance. The time-honored techniques for data modeling are being rewritten, reworked, and modified in a multitude of ways, often wholly dependent on the NoSQL platform under development. This research report analyzes a 2015 DATAVERSITY® survey titled “Modeling NoSQL.” The survey examined a number of crucial issues in the NoSQL world today, with a focus on data modeling in particular.
Tags : 
    
DATAVERSITY
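The survey above distinguishes Key-Value, Wide Column, Document, and Graph stores. As a rough illustration of why each platform implies a different modeling exercise, the sketch below shapes the same hypothetical "order" record under three of those models using plain Python structures (all keys and field names are invented for illustration):

```python
# Sketch: one hypothetical "order" record shaped for three NoSQL storage models.

# Key-value: one opaque blob per key; the store knows nothing of the shape.
kv_store = {"order:1001": '{"customer": "acme", "total": 250.0}'}

# Document: nested structure the database can index and query into.
doc_store = {
    "orders": [
        {"_id": 1001, "customer": "acme",
         "items": [{"sku": "A7", "qty": 2}], "total": 250.0}
    ]
}

# Graph: entities as nodes, relationships as labeled edges.
graph = {
    "nodes": {"order:1001": {"total": 250.0}, "customer:acme": {"name": "Acme"}},
    "edges": [("customer:acme", "PLACED", "order:1001")],
}
```

Because each shape supports different queries and constraints, an entity-relationship model drawn for one rarely transfers unchanged to another, which is the reworking the survey examines.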
Published By: Ted Hills     Published Date: Mar 08, 2017
NoSQL database management systems give us the opportunity to store our data according to more than one data storage model, but our entity-relationship data modeling notations are stuck in SQL land. Is there any need to model schema-less databases, and is it even possible? In this short white paper, Ted Hills examines these questions in light of a recent paper from MarkLogic on the hybrid data model.
Tags : 
    
Ted Hills
Published By: Denodo     Published Date: Feb 07, 2019
With the advent of big data and the proliferation of multiple information channels, organizations must store, discover, access, and share massive volumes of traditional and new data sources. Data virtualization transcends the limitations of traditional data integration techniques such as ETL by delivering a simplified, unified, and integrated view of trusted business data. Learn how you can:
• Conquer siloed data in the enterprise
• Integrate all data sources and types
• Cope with regulatory requirements
• Deliver big data solutions that work
• Take the pain out of cloud adoption
• Drive digital transformation
Tags : 
    
Denodo
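The core idea behind data virtualization, as the abstract describes it, is serving one integrated view over separate physical sources without copying the data into a warehouse first. A minimal sketch of that federated-view pattern, with invented source names and fields standing in for real systems:

```python
# Sketch: a unified logical view over two separate sources, joined on the
# fly at query time rather than copied via ETL. Sources are hypothetical.

crm_rows = [{"cust_id": 1, "name": "Acme"}, {"cust_id": 2, "name": "Globex"}]
billing_rows = [{"customer": 1, "balance": 120.0}, {"customer": 2, "balance": 0.0}]

def unified_customer_view():
    """Present one integrated schema; the sources stay where they are."""
    balances = {r["customer"]: r["balance"] for r in billing_rows}
    for r in crm_rows:
        yield {"cust_id": r["cust_id"], "name": r["name"],
               "balance": balances.get(r["cust_id"])}

view = list(unified_customer_view())
```

A real virtualization layer adds query pushdown, caching, and security on top, but the consumer-facing contract is the same: one view, no copies.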
Published By: MarkLogic     Published Date: Jun 17, 2015
Modern enterprises face increasing pressure to deliver business value through technological innovation that leverages all available data. At the same time, those enterprises need to reduce expenses to stay competitive, deliver results faster to respond to market demands, use real-time analytics so users can make informed decisions, and develop new applications with enhanced developer productivity. All of these factors put big data at the top of the agenda. Unfortunately, the promise of big data has often failed to deliver. With the growing volumes of unstructured and multi-structured data flooding into our data centers, the relational databases that enterprises have relied on for the last 40 years are now too limiting and inflexible. New-generation NoSQL (“Not Only SQL”) databases have gained popularity because they are ideally suited to deal with the volume, velocity, and variety of data that businesses and governments handle today.
Tags : 
data, data management, databse, marklogic, column store, wide column store, nosql
    
MarkLogic
Published By: TopQuadrant     Published Date: Jun 01, 2017
This paper presents a practitioner-informed roadmap intended to assist enterprises in maturing their Enterprise Information Management (EIM) practices, with a specific focus on improving Reference Data Management (RDM). Reference data is found in every application used by an enterprise, including back-end systems, front-end commerce applications, data exchange formats, outsourced and hosted systems, big data platforms, and data warehouses. It can easily be 20–50% of the tables in a data store. And the values are used throughout the transactional and mastered data sets to make the system internally consistent.
Tags : 
    
TopQuadrant
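The consistency role the abstract assigns to reference data, where transactional rows carry codes whose allowed values live in a centrally managed set, can be sketched as a simple validation pass (the code sets and order records below are hypothetical):

```python
# Sketch: transactional records validated against managed reference data.
# Code sets and records are invented for illustration.

country_codes = {"US", "DE", "JP"}           # managed reference data
status_codes = {"OPEN", "SHIPPED", "CLOSED"}

orders = [
    {"id": 1, "country": "US", "status": "OPEN"},
    {"id": 2, "country": "XX", "status": "SHIPPED"},  # unknown country code
]

def validate(order):
    """Return the list of reference-data violations for one record."""
    errors = []
    if order["country"] not in country_codes:
        errors.append("unknown country code: " + order["country"])
    if order["status"] not in status_codes:
        errors.append("unknown status code: " + order["status"])
    return errors

bad = {o["id"]: validate(o) for o in orders if validate(o)}
```

When the same reference sets are enforced everywhere, every consuming system agrees on what "US" or "SHIPPED" means, which is the internal consistency the paper's RDM roadmap targets.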
Published By: Paxata     Published Date: Nov 29, 2016
Every organization looks for ways to reduce costs and run more efficiently. In fact, those are key drivers for the mainstream adoption of Hadoop and self-service BI tools. And while we can now collect and store more data than ever before, and have turned every information worker into a data-hungry analyst, little consideration has been paid to the cost, including time and effort, of preparing data. Download this report to learn more about the hidden cost of data preparation.
Tags : 
    
Paxata
Published By: Paxata     Published Date: Mar 21, 2017
Every organization looks for ways to reduce costs and run more efficiently. In fact, those are key drivers for the mainstream adoption of Hadoop and self-service BI tools. And while we can now collect and store more data than ever before, and have turned every information worker into a data-hungry analyst, little consideration has been paid to the cost, including time and effort, of preparing data. Download this report to learn more about the hidden cost of data preparation.
Tags : 
    
Paxata
Published By: Cambridge Semantics     Published Date: May 11, 2016
With the explosive growth of Big Data, IT professionals find their time and resources squeezed between managing increasingly large, diverse, siloed data stores and increased user demands for timely, accurate data. The graph-based Anzo Smart Data Manager is built to relieve these burdens by automating the process of managing, cataloging, and governing data at enterprise scale and with enterprise security. Anzo Smart Data Manager allows companies to truly understand their data ecosystems and leverage the metadata within them.
Tags : 
    
Cambridge Semantics
Published By: MemSQL     Published Date: Jun 25, 2014
Emerging business innovations focused on realizing quick business value on new and growing data sources require “hybrid transactional and analytical processing” (HTAP), the notion of performing analysis on data directly in an operational data store. While this is not a new idea, Gartner reports that the potential for HTAP has not been fully realized due to technology limitations and inertia in IT departments. MemSQL offers a unique combination of performance, flexibility, and ease of use that allows companies to implement HTAP to power their business applications.
Tags : 
    
MemSQL
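The HTAP notion the abstract defines, performing analysis directly in an operational data store, can be illustrated in miniature: transactional writes and an analytical aggregate run against the same live store, with no warehouse copy in between. The sketch uses an in-memory SQLite database purely for illustration; the table and data are invented:

```python
# Sketch of the HTAP pattern: analytics run directly on the operational
# store. In-memory SQLite stands in for the database; data is hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")

# Transactional writes (the "operational" workload).
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "east", 100.0), (2, "west", 50.0), (3, "east", 75.0)])
db.commit()

# Analytical read over the same live rows (the "analytical" workload).
totals = dict(db.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"))
```

The technology limitations Gartner notes come from doing both workloads at scale concurrently; the point here is only the shape of the pattern, one store serving both.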
Published By: Basho     Published Date: Sep 30, 2016
The Internet of Things (IoT) or the Internet of Everything is changing the way companies interact with their customers and manage their data. These connected devices generate high-volume time series data that can be created in milliseconds. This fast growth of IoT data and other time series data is producing challenges for enterprise applications where data must be collected, saved, and analyzed in the blink of an eye. Your application needs a database built to uniquely handle time series data to ensure your data is continuously available and accurate. Learn about the only NoSQL database optimized for IoT and Time Series data in this technical overview. Riak TS stores and analyzes massive amounts of data and is designed to be faster than Cassandra.
Tags : 
    
Basho
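The access pattern a time-series store like Riak TS is built around, appending timestamp-keyed readings and querying them by time range, can be sketched with the standard library; the sorted-list-plus-bisect stand-in and the sample readings below are illustrative only, not how Riak TS is implemented:

```python
# Sketch of the core time-series pattern: append readings keyed by
# timestamp (milliseconds), then query a contiguous time range.
import bisect

timestamps = []   # kept sorted, ms since epoch
values = []

def record(ts_ms, value):
    i = bisect.bisect(timestamps, ts_ms)
    timestamps.insert(i, ts_ms)
    values.insert(i, value)

def query_range(start_ms, end_ms):
    """Return all readings with start_ms <= ts <= end_ms."""
    lo = bisect.bisect_left(timestamps, start_ms)
    hi = bisect.bisect_right(timestamps, end_ms)
    return values[lo:hi]

for ts, v in [(1000, 20.1), (1250, 20.3), (1900, 19.8), (2600, 20.0)]:
    record(ts, v)

window = query_range(1000, 2000)  # readings in the first second
```

A purpose-built store makes this range scan fast at massive scale by clustering data on the time key, which general-purpose databases typically do not.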
Published By: AnalytixDS     Published Date: May 04, 2018
The General Data Protection Regulation (GDPR) is a regulation by which the European Parliament, the Council of the European Union, and the European Commission intend to strengthen and unify data protection for all individuals within the European Union (EU). It also addresses the export of personal data outside the EU. The GDPR goes into effect on May 25, 2018, making organizations accountable for personal data protection, including how and where data is stored and how it is processed within the organization. Get ready with the most comprehensive governance and automation platform in the industry.
Tags : 
    
AnalytixDS
Published By: Silwood Technology     Published Date: Nov 28, 2016
Business functions in large organizations are usually handled by software application packages. Some of the most well-known of these are from SAP, Oracle, Salesforce, and Microsoft. These packages all store their data in a database. Often, however, it is necessary to use that data with other IT projects, and in that case being able to understand the metadata that defines these databases is critical. The challenge is that their metadata is complex, opaque, and difficult to access. This paper describes how the top application packages store and use their own metadata. It explores the importance of understanding that metadata and examines the obstacles to getting at it in a timely and effective manner.
Tags : 
    
Silwood Technology
Published By: Silwood Technology     Published Date: Mar 21, 2017
Business functions in large organizations are usually handled by software application packages. Some of the most well-known of these are from SAP, Oracle, Salesforce, and Microsoft. These packages all store their data in a database. Often, however, it is necessary to use that data with other IT projects, and in that case being able to understand the metadata that defines these databases is critical. The challenge is that their metadata is complex, opaque, and difficult to access. This paper describes how the top application packages store and use their own metadata. It explores the importance of understanding that metadata and examines the obstacles to getting at it in a timely and effective manner.
Tags : 
    
Silwood Technology
Published By: Looker     Published Date: Mar 15, 2016
Data centralization merges different data streams into a common source through unified variables. This process can provide context to overly broad metrics and enable cross-platform analytics to guide better business decisions. Investments in analytics tools are now paying back a 13.01:1 return on investment (ROI), with increased returns when these tools integrate with three or more data sources. While the perks of centralization are obvious in theory, the quantity and variety of data available in today’s landscape make this difficult to achieve. This report provides a roadmap for how to connect systems, data stores, and institutions (both technological and human). Learn:
• How data centralization enables better analytics
• How to redefine data as a vehicle for change
• How the right BI tool eliminates the data analyst bottleneck
• How to define single sources of truth for your organization
• How to build a data-driven (not just data-rich) organization
Tags : 
    
Looker
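The "unified variables" mechanism in the abstract above, mapping the different field names each stream uses for the same entity onto one common key, is the crux of centralization. A minimal sketch, with invented stream names and fields:

```python
# Sketch: two data streams name the same customer key differently
# ("user_id" vs "customer"); mapping both onto one common key lets the
# streams be analyzed together. All names and records are hypothetical.

web_events = [{"user_id": "u1", "page_views": 5},
              {"user_id": "u2", "page_views": 2}]
crm_records = [{"customer": "u1", "plan": "pro"},
               {"customer": "u2", "plan": "free"}]

def centralize(web, crm):
    """Merge both streams into one record per unified key."""
    merged = {}
    for e in web:
        merged.setdefault(e["user_id"], {})["page_views"] = e["page_views"]
    for r in crm:
        merged.setdefault(r["customer"], {})["plan"] = r["plan"]
    return merged

central = centralize(web_events, crm_records)
```

With the keys unified, a question like "do pro-plan customers view more pages?" becomes a single lookup instead of a cross-system reconciliation.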
Published By: Snowflake Computing     Published Date: Feb 27, 2017
Snowflake’s cloud-built data warehouse delivers the performance, concurrency, simplicity and affordability needed to store and analyze all of an organization’s data in one location. Snowflake combines the power of data warehousing, the flexibility of big data platforms and the elasticity of the cloud. Find out more at snowflake.net.
Tags : 
    
Snowflake Computing
Published By: Bitwise     Published Date: Apr 30, 2018
Organizations that adopt an enterprise data lake model for real-time, self-service and advanced analytics require a fresh approach and outlook to develop a Data Governance strategy as Hadoop changes the way that organizations ingest and store data, as well as how business partners access and use data. This paper outlines pillars for Hadoop Data Governance and Security that provide a framework that can be applied to any company.
Tags : 
    
Bitwise
Published By: Basho     Published Date: Nov 25, 2015
The landscape of scalable operational and analytical systems is changing, disrupting the norm of using relational databases for all workloads. With the growing need to process and analyze Big Data at scale, the demand for alternative strategies has grown and has given rise to NoSQL databases for scalable processing. Mike Ferguson, Managing Director of Intelligent Business Strategies, is an independent IT analyst who specializes in Big Data, BI/Analytics, Data Management, and Enterprise Business Integration. In this whitepaper he discusses the movement towards NoSQL databases for scalable operational and analytical systems, what is driving Big Data analytics from Hadoop to the emergence of Apache Spark, the value of operational analytics and the importance of in-memory processing, and why to use Apache Spark as your in-memory analytical platform for operational analytics.
Tags : 
    
Basho
Published By: MarkLogic     Published Date: Jun 16, 2013
The primary issue discussed within this paper boils down to two disparate database reliability models: ACID vs BASE. The first (ACID) has been around for some 30+ years, is a proven industry standard for SQL-centric and other relational databases, and works remarkably well in the older, yet still extant, world of vertical scaling. The second (BASE) has only recently gained popularity over the past 10 years or so, especially with the rise of social networking, Big Data, NoSQL, and other leviathans in the new world of Data Management. BASE requirements rose out of a need for ever-expanding horizontally scaled distributed networks, with non-relational data stores, and the real-time availability constraints of web-based transaction processing. While there are now more crossovers and negotiations between the two models, they essentially represent two competing groups, with Brewer’s CAP Theorem acting as the referee in the middle forcing tough decisions on each team.
Tags : 
data, data management, unstructured data, nosql, database, acid, base, database transactioning
    
MarkLogic
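The ACID side of the contrast above is easiest to see in code: a pair of writes either both happen or neither does. The sketch below demonstrates atomic rollback with an in-memory SQLite database (the accounts and the simulated crash are invented for illustration); a BASE-style store would instead accept each write independently and reconcile to eventual consistency later:

```python
# Sketch: ACID atomicity. A simulated failure between two writes rolls
# the whole transaction back. In-memory SQLite; data is hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("a", 100.0), ("b", 0.0)])
db.commit()

try:
    with db:  # one atomic transaction: debit then credit
        db.execute("UPDATE accounts SET balance = balance - 50 "
                   "WHERE name = 'a'")
        raise RuntimeError("simulated crash between the two writes")
        # the matching credit to 'b' is never reached
except RuntimeError:
    pass  # the `with db:` block rolled the partial debit back

balances = dict(db.execute("SELECT name, balance FROM accounts"))
```

After the failure, no money has moved: the debit was undone. A BASE system trades this guarantee away to keep accepting writes across a partitioned, horizontally scaled cluster, which is exactly the tension Brewer's CAP theorem formalizes.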
Published By: MapR Technologies     Published Date: Jul 26, 2013
Enterprises are faced with new requirements for data. We now have big data that is different from the structured, cleansed corporate data repositories of the past. Before, we had to plan out structured queries. In the Hadoop world, we don’t have to sort data according to a predetermined schema when we collect it. We can store data as it arrives and decide what to do with it later. Today, there are different ways to analyze data collected in Hadoop, but which one is the best way forward?
Tags : 
white paper, hadoop, nosql, mapr, mapr technologies
    
MapR Technologies
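The schema-on-read idea described above, landing records exactly as they arrive and imposing structure only when reading, can be sketched in a few lines (the raw records and field names below are hypothetical):

```python
# Sketch of schema-on-read: heterogeneous records are stored verbatim at
# ingest; a schema is applied only at read time. Data is hypothetical.
import json

# Ingest: no upfront schema; records land as raw lines.
raw_landing_zone = [
    '{"ts": 1, "user": "u1", "action": "click"}',
    '{"ts": 2, "user": "u2", "action": "buy", "amount": 9.99}',
]

def read_with_schema(lines, fields):
    """Apply a chosen schema at read time, defaulting missing fields."""
    for line in lines:
        rec = json.loads(line)
        yield {f: rec.get(f) for f in fields}

rows = list(read_with_schema(raw_landing_zone, ["ts", "user", "amount"]))
```

Because the schema lives in the reader, a different analysis can project the same raw data through a different set of fields tomorrow, with no re-ingestion, which is the flexibility the Hadoop model trades against the upfront modeling of the relational world.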
Published By: Ted Hills     Published Date: Mar 29, 2016
NoSQL database management systems give us the opportunity to store our data according to more than one data storage model, but our entity-relationship data modeling notations are stuck in SQL land. Is there any need to model schema-less databases, and is it even possible? In this short white paper, Ted Hills examines these questions in light of a recent paper from MarkLogic on the hybrid data model. Ted Hills has been active in the Information Technology industry since 1975. At LexisNexis, Ted co-leads the work of establishing enterprise data architecture standards and governance processes, working with data models and business and data definitions for both structured and unstructured data. His book, NoSQL and SQL Data Modeling, was recently released by Technics Publications (http://technicspub.com).
Tags : 
    
Ted Hills
Published By: TIBCO Software EMEA     Published Date: Jan 17, 2019
Over the past decade, businesses have made tremendous investments in information capture, storage, and analysis. But having a wealth of data isn’t the same as having valuable information. Data Virtualization products and services provide a way to turn your data stores into valuable information that improves decision-making and propels experimentation and innovation. This paper explains:
• What data virtualization is and its benefits
• When and when not to use data virtualization
• How to deploy data virtualization to benefit your business
• How to unlock the value of your SAP data
Tags : 
virtualization, data, users, analytics, sql, application, access, directory, outcomes, costs
    
TIBCO Software EMEA
Published By: TIBCO Software EMEA     Published Date: Jan 17, 2019
Over the past decade, businesses have made tremendous investments in information capture, storage, and analysis. But having a wealth of data isn’t the same as having valuable information. TIBCO® Data Virtualization products and services provide a way to turn your data stores into valuable information that improves decision-making and propels experimentation and innovation. This paper explains:
• What data virtualization is and its benefits
• When and when not to use data virtualization
• How to deploy data virtualization to benefit your business
Tags : 
virtualization, data, users, analytics, sql, application, access, directory, outcomes, costs
    
TIBCO Software EMEA
Published By: TIBCO Software GmbH     Published Date: Jan 15, 2019
Over the past decade, businesses have made tremendous investments in information capture, storage, and analysis. But having a wealth of data isn’t the same as having valuable information. Data Virtualization products and services provide a way to turn your data stores into valuable information that improves decision-making and propels experimentation and innovation. This paper explains:
• What data virtualization is and its benefits
• When and when not to use data virtualization
• How to deploy data virtualization to benefit your business
• How to unlock the value of your SAP data
Tags : 
    
TIBCO Software GmbH
Published By: TIBCO Software GmbH     Published Date: Jan 15, 2019
Over the past decade, businesses have made tremendous investments in information capture, storage, and analysis. But having a wealth of data isn’t the same as having valuable information. TIBCO® Data Virtualization products and services provide a way to turn your data stores into valuable information that improves decision-making and propels experimentation and innovation. This paper explains:
• What data virtualization is and its benefits
• When and when not to use data virtualization
• How to deploy data virtualization to benefit your business
Tags : 
    
TIBCO Software GmbH