This Big Data is Too Hot, This Big Data is Too Cold, and This Big Data is Just Right
Almost every business is aware of the growth of data and is trying to figure out ways to deal with it. Some organizations have begun a “big data initiative” while others are simply trying to determine the best way to store, process, and secure increasingly large sets of data.
Simultaneously, memory costs are dropping; prices fall about 30 percent every 18 months. Many therefore argue that memory is now affordable enough to simply keep piling in the data. But this line of reasoning overlooks two facts: 1. Memory prices can only drop so far, so keeping all data in memory indefinitely will eventually become cost prohibitive. 2. The IoT (Internet of Things) is rapidly increasing the amount of data organizations must contend with. Data isn’t just growing; it is growing ever more quickly.
Hence, we need a good way to determine what data goes into memory and what doesn’t. According to many database engineers, the best way to do this is to separate data by temperature. Yep! You can now sort your data like your laundry—hot, warm, and cold.
Identifying and Dealing With Hot Data
Hot data is data that your users and applications access frequently: the everyday data you need to run your business operations. Hot data should be kept in memory so that it can be accessed quickly and easily. About 20 percent of your data is likely to be categorized as “hot,” and in a typical environment roughly 90 percent of application data accesses hit that 20 percent.
Identifying and Dealing With Cold Data
The other 80 percent or so of your data is hardly ever needed. It might be required for an audit or for year-end reporting, so it certainly shouldn’t be tossed, but it doesn’t belong in expensive memory because the cost can’t be justified. What can you do with cold data? Organizations are increasingly turning to data lakes, where they can store an abundance of structured and unstructured data inexpensively for as long as they need to.
Identifying and Dealing With Warm Data
What is warm data, and where does it go? Warm data sits between hot (you need to access it all the time) and cold (you rarely, if ever, need to access it). Where it belongs is a question each organization should answer individually. In situations where convenience and access are paramount and the budget can support it, it’s fine to keep warm data in memory; otherwise, it can go in the data lake with the cold data.
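The hot/warm/cold sorting described above is often implemented as a simple rule based on how recently data was accessed. Here is a minimal sketch in Python; the seven-day and ninety-day thresholds are hypothetical examples, not recommendations from this article, and each organization should tune them to its own access patterns.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- tune these to your own workload.
HOT_WINDOW = timedelta(days=7)     # accessed within the last week
WARM_WINDOW = timedelta(days=90)   # accessed within the last quarter

def temperature(last_accessed: datetime, now: datetime) -> str:
    """Classify a record as hot, warm, or cold by recency of access."""
    age = now - last_accessed
    if age <= HOT_WINDOW:
        return "hot"    # keep in memory
    if age <= WARM_WINDOW:
        return "warm"   # memory if the budget supports it, else the data lake
    return "cold"       # store cheaply in the data lake

now = datetime(2016, 6, 1)
print(temperature(datetime(2016, 5, 30), now))  # hot
print(temperature(datetime(2016, 4, 1), now))   # warm
print(temperature(datetime(2015, 1, 1), now))   # cold
```

In practice the classifier would also weigh access frequency, not just recency, but the same tiering logic applies.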
Dealing with data by temperature gives you a good guide for keeping data storage costs low and your IT environment clean and uncluttered, while also managing datacenter and computing costs across the organization. It’s a rare win-win-win scenario.