The next big thing in big data: fast data

Becca Noy June 26, 2014
7 minutes read

The big data movement was largely driven by the demand for scale in the velocity, volume, and variety of data. Those three vectors led to the emergence of a new generation of distributed data management platforms, such as Hadoop for batch processing and NoSQL databases for interactive data access. Both were inspired by their respective predecessors: Hadoop by Google's MapReduce and BigTable papers, and NoSQL databases by Amazon's Dynamo.
As we move to fast data, the emphasis shifts to processing big data at speed. Achieving speed without compromising on scale pushes the limits of how most existing big data solutions work and drives new models and technologies for breaking the current speed boundaries. Advances in hardware infrastructure, in particular flash drives, offer great potential for breaking the current speed limit, which is bounded mostly by the performance of hard disk drives.

Why running an existing RDBMS or NoSQL database on top of flash drives is not enough

Many existing databases — including more modern solutions such as NoSQL and NewSQL — were designed around standard hard disk drives (HDDs). They were built on the assumption that disk access is slow, and therefore they rely on techniques such as Bloom filters to avoid touching the disk when the requested data doesn't exist. Another common technique is asynchronous writes to a commit log. A good view of all the optimizations that are often involved in a single write on a NoSQL database is provided by the Cassandra architecture.
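To make this concrete, here is a minimal sketch in Java of that style of optimization (hypothetical class and method names, not Cassandra's actual implementation): an in-memory Bloom filter answers "definitely not here" without a disk seek, and writes are appended to a commit log rather than updating data files in place.

import java.util.BitSet;

// Hypothetical sketch: an HDD-oriented store gates disk reads with an in-memory Bloom
// filter so lookups for keys that definitely do not exist never pay for a disk seek.
public class BloomGatedStore {
    private static final int BITS = 1 << 20;          // 1M-bit in-memory filter
    private static final int NUM_HASHES = 3;
    private final BitSet bloom = new BitSet(BITS);

    private int bit(String key, int seed) {
        int h = key.hashCode() ^ (seed * 0x9E3779B9); // cheap seeded hash for the sketch
        return Math.floorMod(h, BITS);
    }

    public void put(String key, byte[] value) {
        for (int i = 0; i < NUM_HASHES; i++) bloom.set(bit(key, i));
        appendToCommitLog(key, value);                // sequential (often asynchronous) write
    }

    public byte[] get(String key) {
        for (int i = 0; i < NUM_HASHES; i++) {
            if (!bloom.get(bit(key, i))) return null; // definitely absent: skip the disk entirely
        }
        return readFromDisk(key);                     // possibly present: pay for the random read
    }

    private void appendToCommitLog(String key, byte[] value) { /* disk path omitted */ }
    private byte[] readFromDisk(String key) { /* disk path omitted */ return null; }
}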

What happens when disk speed is no longer the bottleneck?
When disk speed is no longer the bottleneck, as is the case with flash devices, many of these optimizations turn into overhead. In other words, with flash it becomes faster and simpler to access the device directly for every write or read operation.
Flash is not just a fast disk
Most existing use cases treat flash devices as a faster disk. Using flash as a fast disk was a short path to bypassing the performance overhead of spinning disks without changing much of the software and applications. Having said that, this approach inherits a lot of unnecessary disk-driver overhead. To exploit the full speed potential of flash, it is best to access the device directly from the application and treat it as a key/value store rather than a disk drive.
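As a rough illustration, assuming a hypothetical interface (this is not any specific vendor's API), the application-facing contract becomes a key/value store backed directly by the flash device rather than files behind a filesystem and disk driver:

// Hypothetical contract: the application talks to flash as a key/value device,
// bypassing the filesystem, page cache and disk-driver layers entirely.
public interface FlashKeyValueStore extends AutoCloseable {
    void put(byte[] key, byte[] value);   // synchronous write straight to the device
    byte[] get(byte[] key);               // direct read; null if the key is absent
    void remove(byte[] key);

    @Override
    void close();                         // release the device handle
}

An implementation would map these calls onto the device's native interface; the point is simply that no filesystem, page cache, or disk-driver emulation sits on the data path.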
Why can’t we simply optimize the existing databases?
When we reach a point where we need to change many of the assumptions and much of the architecture of existing databases to take advantage of new technology and devices such as flash, that's a clear sign that local optimization isn't going to cut it. It calls for disruption.

The next big thing in big data

Given the background above, I believe that in the same way that the demand for big data led to the birth of the current generation of data-management systems, the drive for fast data will lead to new kinds of data-management systems. Unlike the current set of databases, the next generation will be written natively for flash and will access it directly rather than through the regular disk-based path. In addition, those databases will include high-performance event-streaming capabilities as part of their core API, allowing data to be processed as it arrives and thus enabling real-time data processing.
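A hedged sketch of what streaming as part of the core API could look like (hypothetical classes, reusing the FlashKeyValueStore interface sketched above): every write is made durable on flash and, in the same operation, pushed to subscribers so the data can be processed the moment it arrives.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

// Hypothetical sketch: every write is persisted to flash and simultaneously streamed
// to registered listeners, so stream processing is part of the write path itself.
public class StreamingFlashStore {
    private final FlashKeyValueStore flash;   // the direct-to-flash store sketched earlier
    private final List<BiConsumer<byte[], byte[]>> listeners = new CopyOnWriteArrayList<>();

    public StreamingFlashStore(FlashKeyValueStore flash) {
        this.flash = flash;
    }

    public void subscribe(BiConsumer<byte[], byte[]> onWrite) {
        listeners.add(onWrite);               // real-time consumers (analytics, alerts, ...)
    }

    public void put(byte[] key, byte[] value) {
        flash.put(key, value);                // durable, synchronous write to flash
        for (BiConsumer<byte[], byte[]> listener : listeners) {
            listener.accept(key, value);      // process the data as it arrives
        }
    }
}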
 
Many of the existing databases weren't designed to run in the cloud as first-class citizens, and they often require a fairly complex setup to run well in a cloud environment.
As cloud infrastructure matures, we have more options for running big data workloads in the cloud. The next generation of databases needs to be designed to run as a service from the get-go.
To avoid the latency often associated with such a setup, next-generation databases will need to run as close as possible to the application. Since many applications will run in one cloud or another, those databases will need built-in support for different cloud environments. In addition, they will need to leverage dynamic code shipping to send code to the data, allowing complex processing with a minimum of network hops.
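A minimal sketch of dynamic code shipping under these assumptions (hypothetical types; data grids such as GigaSpaces XAP expose task-execution facilities in this spirit): a serializable task is routed to the partition that owns the data, runs there, and only its result crosses the network.

import java.io.Serializable;
import java.util.Map;

// Hypothetical sketch of code shipping: the task travels to the data instead of the
// data travelling to the application, so only a small result crosses the network.
public interface PartitionedStore {

    interface DataTask<R> extends Serializable {
        R run(Map<String, byte[]> localPartition);   // executed on the node that owns the data
    }

    // Serializes the task, routes it to the partition that owns 'routingKey',
    // runs it there, and returns only the result over the network.
    <R> R execute(String routingKey, DataTask<R> task);
}

With an API like this, an aggregation over one customer's records ships a few bytes of code and returns a few bytes of result, instead of dragging the records themselves across the network.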
In-memory databases and data grids are the closest candidates to drive next-gen flash databases
RAM and flash-based devices have much more in common than flash and hard drives. With both RAM and flash, access time is low and is not the real bottleneck, and in-memory databases already access RAM directly through a key/value interface to store and index data.
Those factors make in-memory databases the most likely fit for the next generation of flash databases.
Combining in-memory databases and data grids with flash devices also lets the system overcome some of the key capacity and cost-per-GB limitations of a purely in-memory solution. Integrating the two raises the capacity of a single node to the limit of the underlying flash device rather than the limit of the available RAM.
[Diagram: the in-memory computing (IMC) layer as a front end to the flash device]
As the diagram above shows, the IMC (in-memory computing) layer acts as the front end to the flash device and handles transactional data access and stream processing, while the flash device serves as an extension of RAM from the application's perspective.
There are basically two modes of integration between RAM and the flash device.
LRU Mode – In this mode, RAM serves as a caching layer in front of the flash device. RAM holds the "hot" (most recently used) data, while the flash device holds the entire data set.
Pros: Optimized for maximum capacity.
Cons: Queries are limited to simple key/value access.
Fast Index Mode – In this mode, all the indexes are held in RAM while the flash device holds the entire data set.
Pros: Supports complex querying, including range queries and aggregate functions.
Cons: Capacity is limited by the size of the indexes that can be held in RAM.
In both cases, access to flash is done directly through a key/value interface rather than through a disk-drive interface. Because both RAM and flash are fast, it is simpler to write data to flash synchronously, avoiding the potential complexity of inconsistency between the two layers.
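Here is a minimal sketch of the LRU mode under those assumptions (hypothetical classes, again using the FlashKeyValueStore interface from the earlier sketch): RAM is a bounded, most-recently-used map, reads fall through to flash on a miss, and writes go to flash synchronously so the two layers never diverge.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of "LRU mode": RAM holds the hot subset, flash holds the entire set.
public class LruFlashStore {
    private final FlashKeyValueStore flash;    // entire data set lives on the flash device
    private final Map<String, byte[]> hot;     // bounded, access-ordered cache in RAM

    public LruFlashStore(FlashKeyValueStore flash, final int maxHotEntries) {
        this.flash = flash;
        this.hot = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > maxHotEntries; // evict the least-recently-used entry on overflow
            }
        };
    }

    public synchronized void put(String key, byte[] value) {
        flash.put(key.getBytes(), value);      // synchronous write, so RAM and flash never diverge
        hot.put(key, value);
    }

    public synchronized byte[] get(String key) {
        byte[] value = hot.get(key);
        if (value == null) {
            value = flash.get(key.getBytes()); // cache miss: direct key/value read from flash
            if (value != null) hot.put(key, value);
        }
        return value;
    }
}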
It is also quite common to sync the data from the IMC and flash layers into an external data store running on traditional hard drives. This is done asynchronously and in batches to minimize the performance overhead.
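A hedged sketch of that asynchronous, batched sync (hypothetical names; the external store stands in for an RDBMS or similar system on spinning disks): writes are acknowledged once they reach the fast tier, and a background thread drains them to the external store in bulk.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: changes are acknowledged once they reach the fast RAM/flash tier
// and are pushed to the slower, disk-based external store asynchronously and in batches.
public class AsyncBatchSyncer {

    public interface ExternalStore {
        void writeBatch(List<byte[]> records);  // e.g. a bulk insert into an RDBMS on HDDs
    }

    private final BlockingQueue<byte[]> pending = new LinkedBlockingQueue<>();
    private final ExternalStore external;
    private final int batchSize;

    public AsyncBatchSyncer(ExternalStore external, int batchSize) {
        this.external = external;
        this.batchSize = batchSize;
        Thread drainer = new Thread(this::drainLoop, "external-store-syncer");
        drainer.setDaemon(true);
        drainer.start();
    }

    // Called on the write path; returns immediately so it adds no write latency.
    public void enqueue(byte[] record) {
        pending.add(record);
    }

    private void drainLoop() {
        try {
            while (true) {
                List<byte[]> batch = new ArrayList<>(batchSize);
                batch.add(pending.take());               // block until there is work
                pending.drainTo(batch, batchSize - 1);   // grab whatever else is already queued
                external.writeBatch(batch);              // one bulk write instead of many small ones
            }
        } catch (InterruptedException interrupted) {
            Thread.currentThread().interrupt();          // stop draining on shutdown
        }
    }
}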
Direct flash access in real numbers
To put some real numbers behind these statements, consider a recent benchmark that used an in-memory data grid based on GigaSpaces XAP with direct flash access through a key/value software API supporting various flash devices. The benchmark was run on several devices as well as on private and public cloud services on AWS, and it showed a tenfold increase in managed data without compromising performance.
 
