How does Cassandra work
Big data - information lived anew (Part II)
The ORDIX® news issues 3/2012 to 2/2013 [1- 4] already featured a series of articles on NoSQL databases. As part of our Big Data series, this issue now adds an encore: an article about Apache Cassandra.
Cassandra was originally developed by Facebook and combines important features of Google's BigTable and Amazon's Dynamo database. Cassandra belongs to the column-oriented databases and is comparable to HBase, for example. Its most important characteristics include scalability and high availability as well as very fast writing and reading of individual data records. In addition to the open source version, which is developed under the umbrella of the Apache Foundation, there is a commercial Enterprise Edition from DataStax.
Cassandra is a distributed database and is operated in a cluster. The cluster can also consist of a single node. This is particularly useful for development and testing. A Cassandra cluster consists of a ring of peer nodes. There is no master node and therefore no single point of failure. The data is usually stored redundantly within the cluster. Usually a replication factor of two or three is chosen. The data is replicated automatically.
If one node fails, the remaining nodes take over, and after a while the data is automatically redistributed in order to reach the selected replication factor again. It is also very easy to expand the cluster by adding new nodes (see Figure 1). Both the storage space and the throughput of operations scale almost linearly. This makes it very easy to set up a highly available and easily scalable system.
Logical & Physical IO
The extremely fast writing and reading of data is achieved through an architecture that is also known from relational databases (see Figures 2 and 3). Cassandra physically stores the data in so-called SSTables. When data is written, it is not written directly to an SSTable. Instead, the operation is logged in the commit log file and the memtable in memory is updated. This completes the write operation from the client's point of view. The writing of the data from the memtable to the SSTable is carried out asynchronously. The commit log guarantees the durability of the operation, but is also written asynchronously for performance reasons [Q8].
When reading, the Bloom filter is first used to determine where the requested data is most likely stored. The position of the data record within a file is then determined with the aid of a cache. In the last step, the data is read from the disk and sent to the client.
Cassandra's logical model is similar to that of a relational database. There are tables, columns and primary keys. Figure 4 shows the most important elements.
Wide rows are an important characteristic of Cassandra. This property can be thought of as a table within a record. An example of the use of wide rows is the storage of measured values from a weather station. A new row is created for each day. Within that row, a data record with the measured data is then saved for each measured value (see Figure 5). All data of a day can then be queried very efficiently using the partition key, and a single measured value can be accessed directly with the complete primary key.
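A minimal sketch of such a wide-row table in CQL, assuming illustrative table and column names (not taken from the article or its figures):

```sql
-- One partition (wide row) per station and day; one clustered
-- record per measurement timestamp within that partition.
CREATE TABLE measurements (
    station_id  text,
    day         text,
    ts          timestamp,
    temperature double,
    PRIMARY KEY ((station_id, day), ts)
);

-- Efficient: all measurements of one day via the partition key
SELECT * FROM measurements
 WHERE station_id = 'WS-1' AND day = '2015-05-01';

-- Direct access to a single value via the complete primary key
SELECT temperature FROM measurements
 WHERE station_id = 'WS-1' AND day = '2015-05-01'
   AND ts = '2015-05-01 12:00:00';
```

The composite partition key `(station_id, day)` keeps each partition bounded to one day, while `ts` as a clustering column orders the measurements within the row.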
In Cassandra, efficient data access is only possible using the partition key or the complete primary key. A normalized data model will therefore not work in many cases. Even when modeling the data, it is important to know the required queries and to store the data in such a way that efficient queries are possible. This means that redundant storage of the data cannot always be avoided.
No exclusive access to objects in the database is required to expand the data model. This makes it easy to create new tables or add new columns under full load. As shown in Figure 5, different data records in a table can also have different columns.
In addition to storing data in individual columns of the database, it is also common to store more complex objects as BLOB or JSON documents. Cassandra does not offer native support for this, so serialization and deserialization must be performed by the application. In practice, however, this is not a big problem, as there are a large number of freely available libraries for these tasks.
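Sketched in CQL, such a design might look as follows; the table and the JSON payload are made-up examples, and the (de)serialization itself happens in the client application, not in Cassandra:

```sql
-- Complex objects are serialized by the application and stored
-- as text (e.g. JSON) or as a blob column.
CREATE TABLE user_profiles (
    user_id text PRIMARY KEY,
    profile text   -- JSON document, (de)serialized by the client
);

INSERT INTO user_profiles (user_id, profile)
VALUES ('u42', '{"name": "Alice", "interests": ["iot", "bigdata"]}');
```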
Another very interesting functionality is the so-called "time to live" (TTL). With INSERT or UPDATE, a lifetime for individual attributes of a data record can be specified via the TTL. After the time has expired, the data is no longer visible in queries and is automatically deleted.
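In CQL this looks as follows; the session table is an illustrative example, not from the article:

```sql
-- The record disappears from query results 3600 seconds after the insert
INSERT INTO sessions (session_id, user_id)
VALUES ('s-123', 'u42') USING TTL 3600;

-- The remaining lifetime of a column can be queried
SELECT TTL(user_id) FROM sessions WHERE session_id = 's-123';
```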
CQL - the slightly different SQL
The query language CQL (Cassandra Query Language) and the shell cqlsh are also similar to their relational counterparts. CQL is very similar to SQL. In addition to commands for querying data, DML and DDL commands are also part of the language. In contrast to SQL, however, there is no JOIN and no GROUP BY, for example. The shell is comparable to Oracle's SQL*Plus. It is used for interactive work with the database and allows the execution of CQL commands. The shell also has its own commands with which, for example, the database catalog can be output (DESCRIBE) or data can be imported and exported (COPY).
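A few of these cqlsh-specific commands in action; the table name is an illustrative assumption:

```sql
-- cqlsh shell commands (not part of CQL itself)
DESCRIBE TABLES;            -- list tables in the current keyspace
DESCRIBE TABLE measurements;  -- show the DDL of one table

-- CSV export and import
COPY measurements TO 'measurements.csv';
COPY measurements FROM 'measurements.csv';
```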
As already mentioned, Cassandra is ideally suited for writing and reading individual data records very quickly. What is still missing for an OLTP system, however, is the ability to execute transactions. Cassandra does not fully support ACID transactions. But with tunable consistency, lightweight transactions, and batch operations, Cassandra offers some very useful functionality with which many requirements can be met.
Tunable consistency means that the consistency of the data (consistency level) and the availability of the system can be defined for each individual write and read operation. With a higher requirement for consistency, the availability is automatically reduced and vice versa. Cassandra offers a lot of fine-tuning options here. The three most important are explained in Figure 6. When we talk about nodes in this context, we mean only those nodes that are involved in the replication of the data to be read or written.
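In cqlsh, the consistency level can be set for the current session; drivers typically allow it per statement. A short sketch:

```sql
-- Set the consistency level for subsequent operations in cqlsh
CONSISTENCY ONE;     -- one replica suffices: fastest, weakest guarantee
CONSISTENCY QUORUM;  -- a majority of the replicas must respond
CONSISTENCY ALL;     -- all replicas: strongest consistency, lowest availability
```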
By using lightweight transactions, you can ensure that no updates are lost for a single write operation (INSERT or UPDATE). In CQL this is achieved using the IF keyword. The INSERT statement in the example in Figure 7 is only executed if there is not yet a data record with the specified primary key. The UPDATE statement is only executed if the password still has the expected value of '123'. Lightweight transactions are always carried out as an atomic operation. However, the execution is relatively time-consuming and should therefore be used with caution.
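Figure 7 is not reproduced here, but based on the description, the two statements might look like this (table and column names are assumptions):

```sql
-- Insert only if no record with this primary key exists yet
INSERT INTO users (login, password)
VALUES ('alice', '123') IF NOT EXISTS;

-- Update only if the column still has the expected value
UPDATE users SET password = 'xyz'
 WHERE login = 'alice' IF password = '123';
```

Both statements return a boolean `[applied]` column indicating whether the condition held and the write was carried out.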
With batch operations it is possible to perform multiple write operations as a single atomic operation. This can be used to ensure the consistency of data across multiple tables. However, if data within a batch operation is distributed to several nodes, then the individual nodes must synchronize, which usually has a negative impact on performance. If, on the other hand, a lot of data is inserted on a single node (for example into a single wide row), then performance can be increased significantly with batch operations.
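A batch keeping two denormalized tables in sync could be sketched like this (table names are illustrative):

```sql
-- Atomic batch: either both writes are applied or neither is
BEGIN BATCH
    INSERT INTO users_by_login (login, email)
    VALUES ('alice', 'alice@example.com');
    INSERT INTO users_by_email (email, login)
    VALUES ('alice@example.com', 'alice');
APPLY BATCH;
```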
Cassandra itself does not offer any analytical functions. Only simple SELECT statements are possible in CQL. As already mentioned, there is neither a GROUP BY nor a JOIN. In addition, the WHERE condition is severely restricted because the partition key must be specified.
It is common practice to create your own tables for the required analytical queries. This can be an index table to find the necessary data, or the data is stored pre-aggregated in tables with "Counter Columns".
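Pre-aggregation with a counter column might look as follows; the names are made up for illustration. Note that in a counter table every non-key column must be a counter:

```sql
CREATE TABLE page_views (
    page  text,
    day   text,
    views counter,
    PRIMARY KEY (page, day)
);

-- Counters are modified with UPDATE, never with INSERT
UPDATE page_views SET views = views + 1
 WHERE page = '/home' AND day = '2015-05-01';
```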
If that is not enough, analytical capabilities can be retrofitted externally:
- DataStax Enterprise:
The commercial Cassandra version offers the possibility of performing analytical evaluations both in real time and in batches.
- BI tools:
Many BI tools offer a connection to Cassandra. Examples are Pentaho, Talend and Jaspersoft.
- Drivers:
On the DataStax homepage there are drivers for Spark and Hunk [Q6]. There are other projects, for example, on GitHub [Q7].
In addition to the open source version of Cassandra, DataStax also offers the commercial DataStax Enterprise (DSE). This is a particularly intensively tested Cassandra version (DataStax Enterprise Server), which has been expanded to include additional tools and functionalities. In particular, these are:
- DataStax OpsCenter (administration interface)
- Backup and recovery
- Service and support (24x7)
- Analytics (Hadoop and Spark)
- Enterprise Search (Solr)
- In-memory option
- Security functions (LDAP, encryption)
Even if Cassandra is very flexible, the database is not equally well suited for all use cases. Here are a few examples in which Cassandra has proven itself very well in practice:
- Internet of Things:
Due to the very high write speed, measured values from sensors can be stored and processed very efficiently.
- Fraud Detection:
Due to the high speed, a decision can be made within a few milliseconds whether a transaction is a possible case of fraud. The flexible data model, which can be expanded at runtime, enables the company to react quickly to new threats.
- Logging:
Instead of writing log files, it is possible to save all log data in the database.
- Session cache:
Cassandra can be used as a very fast and flexible session cache in web applications.
Cassandra is not the first choice whenever analytical evaluation is the primary focus. Even though there are solutions for this, for example with the DataStax Enterprise Edition, it should be examined very carefully in each individual case whether Cassandra is the right tool.
If full ACID support is required, the decision is again quite easy. Since Cassandra cannot provide this, either a classic RDBMS or an ACID-compliant NoSQL database, such as Neo4j, must be used.
This article has given a rough overview of Apache Cassandra and the possible areas of application. The documentation on the DataStax homepage is a particularly good choice for further familiarization.
In practice, Cassandra has already proven itself in many projects. Due to the many similarities to relational databases, the switch is not difficult. However, it is important to understand that the database works differently internally than an RDBMS. This must be taken into account, particularly when designing the application and the data model. Our experts will be happy to support you in this.