parallel processing

Results 1 - 9 of 9
Published By: WhereScape     Published Date: Aug 18, 2016
Data Vault 2.0 leverages parallel database processing for large data sets and provides an extensible approach to design that enables agile development. WhereScape provides data warehouse automation software solutions that enable Data Vault agile project delivery through accelerated development, documentation and deployment without sacrificing quality or flexibility.
WhereScape
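A defining trait of Data Vault 2.0, alluded to above, is that hubs are keyed by deterministic hashes of business keys, which makes loads order-independent and therefore parallelizable. A minimal Python sketch of that pattern (the customer hub, the MD5 choice, and all field names are illustrative assumptions, not WhereScape's implementation):

```python
import hashlib
from datetime import datetime, timezone

def hash_key(*business_keys: str) -> str:
    """Derive a deterministic hub hash key from business-key parts."""
    normalized = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def hub_row(business_key: str, record_source: str) -> dict:
    """Build a hub record: hash key, load timestamp, source, business key."""
    return {
        "customer_hk": hash_key(business_key),
        "load_dts": datetime.now(timezone.utc).isoformat(),
        "record_source": record_source,
        "customer_bk": business_key,
    }

row = hub_row("CUST-1001", "crm_system")
```

Because the key depends only on the normalized business key, two loaders running in parallel against different source systems produce identical hub keys for the same customer.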
Published By: Wave Computing     Published Date: Jul 06, 2018
This paper argues the case for coarse-grained reconfigurable array (CGRA) architectures for efficient acceleration of the data-flow computations used in deep neural network training and inferencing. It discusses the problems with other parallel acceleration systems, such as massively parallel processor arrays (MPPAs) and heterogeneous systems based on CUDA and OpenCL, and proposes that CGRAs with autonomous computing features deliver improved performance and computational efficiency. The machine learning compute appliance that Wave Computing is developing executes data-flow graphs using multiple clock-less, CGRA-based systems-on-chip (SoCs), each containing 16,000 processing elements (PEs). The paper describes the tools needed for efficient compilation of data-flow graphs to the CGRA architecture, and outlines Wave Computing’s WaveFlow software (SW) framework for the online mapping of models from popular frameworks such as TensorFlow, MXNet and Caffe.
Wave Computing
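Mapping a data-flow graph onto an array of processing elements, as the abstract above describes, starts from a dependency ordering of the graph's nodes. A toy Python sketch of that first step (the example graph, the round-robin placement policy, and the PE count are illustrative assumptions, not the WaveFlow mapper):

```python
from graphlib import TopologicalSorter

# Toy data-flow graph for a dense layer: node -> set of upstream dependencies.
graph = {
    "matmul": {"weights", "inputs"},
    "bias_add": {"matmul", "bias"},
    "relu": {"bias_add"},
    "weights": set(), "inputs": set(), "bias": set(),
}

def schedule(graph, num_pes=4):
    """Order nodes so dependencies come first, then assign PEs round-robin."""
    order = list(TopologicalSorter(graph).static_order())
    return {node: i % num_pes for i, node in enumerate(order)}

placement = schedule(graph)  # node -> processing element index
```

A real compiler would weigh data locality and PE utilization rather than placing round-robin, but the dependency ordering is the common starting point.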
Published By: Pure Storage     Published Date: Jan 12, 2018
Data is growing at an astonishing rate, and that growth will continue. New techniques in data processing and analytics, including AI, machine learning and deep learning, allow specially designed applications not only to analyze data but to learn from the analysis and make predictions. Processing this data requires computer systems built on multi-core CPUs or GPUs using parallel processing and extremely fast networks. However, legacy storage solutions are based on architectures that are decades old, unscalable and ill suited to the massive concurrency machine learning requires. Legacy storage is becoming a bottleneck in big data processing, and a new storage technology is needed to meet the performance demands of data analytics.
Tags : 
reporting, artificial intelligence, insights, organization, institution, recognition
    
Pure Storage
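The scatter-gather pattern behind the parallel processing mentioned above can be sketched in a few lines: split the data into chunks, compute partial aggregates concurrently, and combine the partials. A hedged Python illustration (chunk size and worker count are arbitrary; a `ProcessPoolExecutor` would be the usual choice for CPU-bound work, threads simply keep this demo portable):

```python
from concurrent.futures import ThreadPoolExecutor

def feature_stats(chunk):
    """Per-chunk partial aggregates (count, sum) for a mean computation."""
    return len(chunk), sum(chunk)

def parallel_mean(data, workers=4, chunk_size=1000):
    """Scatter chunks to workers, gather partials, combine into the mean."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(feature_stats, chunks))
    n = sum(c for c, _ in parts)
    total = sum(s for _, s in parts)
    return total / n
```

The point the abstract makes is that when many workers issue reads like this concurrently, storage, not compute, becomes the limiting factor.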
Published By: TIBCO Software APAC     Published Date: Aug 13, 2018
Big data has raised the bar for data virtualization products. To keep pace, TIBCO® Data Virtualization added a massively parallel processing engine that supports big-data scale workloads. Read this whitepaper to learn how it works.
TIBCO Software APAC
Published By: IBM     Published Date: Jul 06, 2017
DB2 is a proven database for handling the most demanding transactional workloads. But the recent trend is to enable relational databases to handle analytic queries more efficiently by adding an in-memory column store alongside, to aggregate data and provide faster results. IBM's BLU Acceleration technology does exactly that. While BLU isn't brand new, the ability to spread the column store across a massively parallel processing (MPP) cluster of up to 1,000 nodes is a new addition to the technology. That, along with simpler monthly pricing options and integration with dashDB data warehousing in the cloud, makes DB2 for LUW a very versatile database.
Tags : 
memory analytics, database, efficiency, acceleration technology, aggregate data
    
IBM
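The reason a column store speeds up the analytic queries described above is that an aggregate only touches the columns it names, instead of dragging every row's full width through memory. A toy Python contrast of the two layouts (illustrative only, not BLU Acceleration's internals):

```python
# Row store: a list of records. Column store: one list per column.
rows = [{"region": r, "amount": a}
        for r, a in [("east", 10), ("west", 5), ("east", 7), ("west", 3)]]

columns = {
    "region": [r["region"] for r in rows],
    "amount": [r["amount"] for r in rows],
}

def sum_by_region(cols, region):
    """Scan only the two referenced columns; other columns are never read."""
    return sum(a for g, a in zip(cols["region"], cols["amount"]) if g == region)

east_total = sum_by_region(columns, "east")  # 10 + 7 = 17
```

Spreading each column's list across MPP cluster nodes, as BLU now can, lets every node scan its slice in parallel before a final combine.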
Published By: IBM Watson Health     Published Date: Nov 10, 2017
To address the volume, velocity, and variety of data necessary for population health management, healthcare organizations need a big data solution that can integrate with other technologies to optimize care management, care coordination, risk identification and stratification, and patient engagement. Read this whitepaper and discover how to build a data infrastructure using the right combination of data sources; a “data lake” framework with massively parallel computing that expedites the answering of queries and the generation of reports to support care teams; analytic tools that identify care gaps and rising risk; predictive modeling; and effective screening mechanisms that quickly find relevant data. In addition to learning about these crucial tools for making your organization’s data infrastructure robust, scalable, and flexible, get valuable information about big data developments such as natural language processing and geographical information systems, tools that can provide insight into unstructured and location-based data.
Tags : 
population health management, big data, data, data analytics, big data solution, data infrastructure, analytic tools, predictive modeling
    
IBM Watson Health
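Risk stratification, one of the capabilities listed above, amounts to bucketing patients into tiers by comparing a model score against configurable cutoffs. A minimal Python sketch (the tier names, score field, and thresholds are hypothetical, not Watson Health's model):

```python
def stratify(patients, thresholds=(0.2, 0.6)):
    """Bucket patients into low / rising / high risk tiers by model score."""
    low_cut, high_cut = thresholds
    tiers = {"low": [], "rising": [], "high": []}
    for p in patients:
        if p["risk_score"] >= high_cut:
            tiers["high"].append(p)
        elif p["risk_score"] >= low_cut:
            tiers["rising"].append(p)
        else:
            tiers["low"].append(p)
    return tiers

cohort = stratify([{"id": 1, "risk_score": 0.1},
                   {"id": 2, "risk_score": 0.3},
                   {"id": 3, "risk_score": 0.7}])
```

Care teams typically focus outreach on the "rising" tier, where intervention can still change the trajectory.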
Published By: HP Data Center     Published Date: Feb 18, 2009
Today's data centers are on a path where "old world" business, technology, and facility metrics are being pushed aside in favor of unparalleled service delivery capabilities, processes, and methodologies. The expectations set by today's high-density technology deployments are driving service delivery models to extremes, with very high service delivery capabilities adopted as baseline requirements within today's stringent business models. Part of the revolution driving today's data center modeling to unprecedented performance and efficiency levels is that advances in computer processing keep delivering higher performance in ever-smaller footprints, concentrating more load into less space.
Tags : 
hp data center, data center environment, high density computing, rack-mount servers, mep (mechanical, electrical, and plumbing), virtualization, consolidation, it deployments, server consolidation, networking, storage
    
HP Data Center
Published By: Calpont     Published Date: Mar 13, 2012
This white paper offers insight into how the Calpont InfiniDB database performs far beyond the abilities of a row-based database.
Tags : 
data warehouse, warehouse, benchmark, infinidb, calpont, row-based, mpp, parallel processing, columns, rows, column, row, data management
    
Calpont
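A column-oriented MPP database of the kind benchmarked above distributes work by partitioning data across nodes, aggregating locally, then combining partial results. A simplified Python sketch of that scatter-gather (the hash partitioning and the count aggregate are illustrative, not InfiniDB's engine):

```python
def partition(rows, key, num_nodes):
    """Hash-partition rows across MPP nodes by a grouping/join key."""
    nodes = [[] for _ in range(num_nodes)]
    for row in rows:
        nodes[hash(row[key]) % num_nodes].append(row)
    return nodes

def mpp_count(rows, key, num_nodes=4):
    """Each node counts its own partition; a final step sums the partials."""
    partials = [len(part) for part in partition(rows, key, num_nodes)]
    return sum(partials)
```

Hashing on the key guarantees all rows sharing a value land on the same node, so per-group aggregates never need cross-node shuffles in this sketch.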
Published By: IBM     Published Date: Jul 05, 2018
Scalable data platforms such as Apache Hadoop offer unparalleled cost benefits and analytical opportunities. IBM helps you fully leverage the scale and promise of Hadoop, enabling better results for critical projects and key analytics initiatives. The end-to-end information capabilities of IBM® Information Server let you better understand data and cleanse, monitor, transform and deliver it. IBM also helps bridge the gap between business and IT with improved collaboration. By using Information Server's "flexible integration" capabilities, the information that drives business and strategic initiatives, from big data and point-of-impact analytics to master data management and data warehousing, is trusted, consistent and governed in real time. Since its inception, Information Server has been a massively parallel processing (MPP) platform able to support everything from small to very large data volumes, regardless of complexity.
IBM
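The cleanse-transform-deliver flow described above can be illustrated with a toy pipeline: standardize each raw record, then deduplicate before delivery. A hedged Python sketch (the field names and cleansing rules are invented for illustration; the real product runs such stages in parallel across its MPP engine):

```python
def cleanse(record):
    """Standardize a raw record: trim and title-case the name, fold email case."""
    return {
        "name": record["name"].strip().title(),
        "email": record["email"].strip().lower(),
    }

def deliver(records):
    """Cleanse each record, then deduplicate on email; later records win."""
    seen = {}
    for rec in map(cleanse, records):
        seen[rec["email"]] = rec
    return list(seen.values())

raw = [{"name": " ada lovelace ", "email": "Ada@X.COM"},
       {"name": "Ada Lovelace", "email": "ada@x.com"}]
clean = deliver(raw)
```

Because each stage is a pure function of one record (until the final dedupe), the cleanse step parallelizes trivially across data partitions.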