Bigstep cited again as one of Netcraft’s most reliable infrastructure providers
This is starting to become a habit. Netcraft has released its latest figures on the world’s most reliable infrastructure providers, and yet again Bigstep features in the top ten.
This takes Bigstep’s tally to eight appearances in the past 11 Netcraft analyses, one of the most consistent records of any company featured in the Netcraft lists.
We know that the world’s most powerful public computing infrastructure is of little use without the reliability to go with it – here’s how we do it.
Further industry recognition for Bigstep
For a relatively new organisation, Bigstep is making a habit of scooping up industry awards and recognition. We were recently named ‘Newcomer of the Year’ at the UK Cloud Awards 2014 and have maintained an almost constant presence in the Netcraft top 10 since our launch last year, an impressive feat that has continued into March.
The Netcraft top 10 is an independent and authoritative analysis of the most reliable infrastructure providers. Netcraft measures and publishes the response times of leading hosting providers’ sites.
In March 2014 we ranked 7th in the top 10, with a highly impressive failed-request rate of 0.022% and an average connection time of 0.065 seconds. Given that so few of our regular IaaS competitors feature in the Netcraft lists, this is something to celebrate. But while our customers value such reliability, it is really only half the story.
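As a rough illustration of how metrics like these can be derived, here is a minimal sketch of an uptime probe. It is not Netcraft’s actual methodology (which is proprietary), and the URL and sample count are placeholders; it simply polls a site repeatedly and reports the failed-request rate and average response time:

```python
import time
import urllib.request

# Purely illustrative probe -- not Netcraft's methodology.
# example.com stands in for whichever site you want to monitor.
URL = "https://example.com/"
SAMPLES = 100

failures = 0
response_times = []

for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read(1)  # pull at least one byte over the wire
        response_times.append(time.monotonic() - start)
    except Exception:
        failures += 1
    time.sleep(1)  # space the probes out

print(f"Failed requests: {failures / SAMPLES:.3%}")
if response_times:
    avg = sum(response_times) / len(response_times)
    print(f"Average response time: {avg:.3f}s")
```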
The world’s most powerful public computing infrastructure
Speed is of the essence when it comes to processing big data. People want analysis in real time, and running applications such as Hadoop in a virtual environment is a major restriction on performance.
Hypervisors are to blame for this. Many businesses have moved their data to a virtual environment, but while there are advantages to doing this, speed and performance are not among them.
Even the very best hypervisors on the market waste at least 20% of a server’s bare metal power. When speed is so important to getting value from big data, this is an unacceptable drop in performance, and in many cases the loss is actually much worse.
So we have removed the hypervisor completely. Forbes recently said that this made our infrastructure ‘perfect for crunching Big Data in high volumes and at high speed’ and we are not going to argue!
But what really makes such speed so attractive to customers is that it comes with 100% uptime – an intriguing combination, wouldn’t you say?
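If you want to gauge the virtualisation penalty for yourself, a simple approach is to run the same timed workload on a virtualised instance and on a bare metal one and compare the results. The sketch below is purely illustrative: the iteration count is arbitrary, it exercises only the CPU, and real Hadoop jobs are dominated by I/O and memory bandwidth, so treat it as a starting point rather than a substitute for benchmarking your actual workloads.

```python
import time

def cpu_benchmark(iterations: int = 10_000_000) -> float:
    """Time a fixed amount of pure-CPU work; lower is better."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

if __name__ == "__main__":
    # Best-of-five reduces noise from other processes on the machine.
    runs = [cpu_benchmark() for _ in range(5)]
    print(f"Best of {len(runs)} runs: {min(runs):.3f}s")
```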