DataStax Enterprise is known for blazing fast analytics, continuous availability, strong security and operational simplicity.
- High performance in-memory analytics on bare metal.
- Maximum throughput inside the cluster with up to 40 Gbps per node.
- Lightning fast write performance on all-SSD storage.
- Automated management of clusters across regions.
The Best Performance Meets the Best Analytics
DataStax Enterprise is built on Apache Cassandra, one of the most popular NoSQL databases in the world. Cassandra is a fast, scalable, and flexible NoSQL database with excellent write performance. Through its Hadoop and Spark integrations it also exposes an HDFS-like interface, maintaining data locality when the analytics workers run on the same nodes.
- Production-certified Apache Cassandra for today’s intense, always-on environments.
- Enterprise search and integrated analytics on Cassandra data.
- Workload management for transactional, analytical, and search operations.
- In-memory computing option for lightning-fast transactional and analytics workloads.
- Comprehensive enterprise security features.
- Automatic management services transparently handle administration and performance.
- Advanced visual management for key administration and performance monitoring.
- Software maintenance, support and performance reviews with recommendations.
Nodes are organized into two instance arrays: seed nodes and regular nodes. Seed nodes are similar to regular nodes, but have the extra role of bootstrapping the Gossip inter-node information exchange protocol. Seed nodes are usually provisioned with the same hardware characteristics as the regular nodes.
We enable virtual nodes in Cassandra, but we also calculate the initial tokens so that nodes are balanced appropriately from the beginning. Virtual nodes can be turned off in case Solr is needed. However, if Solr is not needed, we recommend keeping the setting on, as it helps balance the load when additional nodes are added.
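The initial-token calculation mentioned above can be sketched as follows. This is a minimal illustration, not the exact tooling we use: it simply spreads tokens evenly across the Murmur3 partitioner's range of [-2^63, 2^63 - 1].

```python
def initial_tokens(node_count: int) -> list[int]:
    """Evenly spaced initial tokens across the Murmur3 range [-2**63, 2**63 - 1]."""
    ring = 2 ** 64
    return [i * ring // node_count - 2 ** 63 for i in range(node_count)]

# A 4-node cluster gets tokens one quarter of the ring apart:
print(initial_tokens(4))
# → [-9223372036854775808, -4611686018427387904, 0, 4611686018427387904]
```

Each node would then receive its value via the `initial_token` setting in `cassandra.yaml`, giving every node an equal slice of the ring from the first boot.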
Deployment of DataStax on Metal Cloud
Scaling out requires adding extra compute instances to the instance array. The DataStax cluster running on the bare metal cloud is automatically notified, and the new nodes are assigned a place in the ring. Decommissioning a compute instance is also possible, but only after all data has been evicted from the respective node. We recommend adding nodes in symmetrical increments (such as 4, 8, 16) so that they can be properly balanced across the Murmur3 partitioner.
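The reason symmetrical (doubling) increments balance cleanly can be seen with the same evenly-spaced token scheme sketched earlier, an illustration under that assumption: when the node count doubles, every existing token keeps its place and each new node simply bisects one of the old ranges.

```python
def initial_tokens(node_count: int) -> list[int]:
    """Evenly spaced tokens across the Murmur3 range [-2**63, 2**63 - 1]."""
    ring = 2 ** 64
    return [i * ring // node_count - 2 ** 63 for i in range(node_count)]

# Doubling from 4 to 8 nodes preserves all existing ring positions;
# each new node takes exactly half of one old node's range.
old = set(initial_tokens(4))
new = set(initial_tokens(8))
print(old <= new)   # the 4-node tokens are a subset of the 8-node tokens

# Growing from 4 to 6 nodes instead would shift positions and force a
# more disruptive rebalance:
print(old <= set(initial_tokens(6)))
```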
Scaling vertically is also possible: simply update the characteristics of the instance array, such as RAM and CPU, accordingly. After the next reboot, the nodes will run with the updated configuration.