
Cassandra High Availability

Book Description

Harness the power of Apache Cassandra to build scalable, fault-tolerant, and readily available applications

In Detail

Apache Cassandra is a massively scalable, peer-to-peer database designed for 100 percent uptime, with deployments spanning tens of thousands of nodes and petabytes of data.

This book offers a practical insight into building highly available, real-world applications with Apache Cassandra. It starts with the fundamentals, helping you understand how Cassandra's architecture allows it to achieve 100 percent uptime when other systems struggle to do so. You'll gain a solid understanding of data distribution, replication, and Cassandra's highly tunable consistency model. This is followed by an in-depth look at Cassandra's robust support for multiple data centers and at how to scale out a cluster. Next, the book explores application design, with chapters on the native driver and data modeling. Lastly, you'll find out how to steer clear of common antipatterns and take advantage of Cassandra's ability to fail gracefully.

What You Will Learn

  • Understand the core architecture of Cassandra that enables highly available applications
  • Use replication and tunable consistency levels to balance consistency, availability, and performance
  • Set up multiple data centers to enable failover, load balancing, and geographic distribution
  • Add capacity to your cluster with zero downtime
  • Take advantage of high availability features in the native driver
  • Create data models that scale well and maximize availability
  • Understand how to avoid common antipatterns
  • Keep your system working well even during failure scenarios
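Two of these themes, replication and tunable consistency, come together in how a keyspace is defined and queried. As a brief sketch (the keyspace, table, and data center names here are illustrative, not taken from the book): replication is configured per keyspace, while consistency is chosen per request.

```sql
-- Replicate each row three times in each of two (hypothetical) data centers,
-- using NetworkTopologyStrategy so Cassandra is aware of the DC topology.
CREATE KEYSPACE store
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc_east': 3,
    'dc_west': 3
  };

-- Consistency is set per operation, not per keyspace; in cqlsh:
CONSISTENCY LOCAL_QUORUM;  -- require a quorum of replicas in the local DC only
SELECT * FROM store.users WHERE user_id = 42;
```

Raising the consistency level (for example, to QUORUM across all data centers) strengthens guarantees at the cost of latency and availability; the book's chapters on replication, data centers, and the native driver cover these trade-offs in depth.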

Downloading the example code for this book: you can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

Table of Contents

  1. Cassandra High Availability
    1. Table of Contents
    2. Cassandra High Availability
    3. Credits
    4. About the Author
    5. About the Reviewers
    6. www.PacktPub.com
      1. Support files, eBooks, discount offers, and more
        1. Why subscribe?
        2. Free access for Packt account holders
    7. Preface
      1. What this book covers
      2. What you need for this book
      3. Who this book is for
      4. Conventions
      5. Reader feedback
      6. Customer support
        1. Errata
        2. Piracy
        3. Questions
    8. 1. Cassandra's Approach to High Availability
      1. ACID
      2. The monolithic architecture
      3. The master-slave architecture
        1. Sharding
        2. Master failover
      4. Cassandra's solution
      5. Cassandra's architecture
        1. Distributed hash table
        2. Replication
          1. Replication across data centers
        3. Tunable consistency
          1. The CAP theorem
      6. Summary
    9. 2. Data Distribution
      1. Hash table fundamentals
        1. Distributing hash tables
      2. Consistent hashing
        1. The mechanics of consistent hashing
      3. Token assignment
        1. Manually assigned tokens
        2. vnodes
          1. How vnodes improve availability
            1. Adding and removing nodes
            2. Node rebuilding
            3. Heterogeneous nodes
      4. Partitioners
        1. Hotspots
          1. Effects of scaling out using ByteOrderedPartitioner
          2. A time-series example
      5. Summary
    10. 3. Replication
      1. The replication factor
        1. Replication strategies
          1. SimpleStrategy
          2. NetworkTopologyStrategy
      2. Snitches
        1. Maintaining the replication factor when a node fails
      3. Consistency conflicts
        1. Consistency levels
        2. Repairing data
      4. Balancing the replication factor with consistency
      5. Summary
    11. 4. Data Centers
      1. Use cases for multiple data centers
        1. Live backup
        2. Failover
        3. Load balancing
        4. Geographic distribution
        5. Online analysis
          1. Analysis using Hadoop
          2. Analysis using Spark
      2. Data center setup
        1. RackInferringSnitch
        2. PropertyFileSnitch
        3. GossipingPropertyFileSnitch
        4. Cloud snitches
      3. Replication across data centers
        1. Setting the replication factor
        2. Consistency in a multiple data center environment
          1. The anatomy of a replicated write
          2. Achieving stronger consistency between data centers
      4. Summary
    12. 5. Scaling Out
      1. Choosing the right hardware configuration
      2. Scaling out versus scaling up
      3. Growing your cluster
        1. Adding nodes without vnodes
        2. Adding nodes with vnodes
      4. How to scale out
        1. Adding a data center
      5. How to scale up
        1. Upgrading in place
        2. Scaling up using data center replication
      6. Removing nodes
        1. Removing nodes within a data center
        2. Decommissioning a data center
      7. Other data migration scenarios
      8. Snitch changes
      9. Summary
    13. 6. High Availability Features in the Native Java Client
      1. Thrift versus the native protocol
      2. Setting up the environment
      3. Connecting to the cluster
      4. Executing statements
        1. Prepared statements
        2. Batched statements
          1. Caution with batches
      5. Handling asynchronous requests
        1. Running queries in parallel
      6. Load balancing
      7. Failing over to a remote data center
        1. Downgrading the consistency level
          1. Defining your own retry policy
        2. Token awareness
      8. Tying it all together
        1. Falling back to QUORUM
      9. Summary
    14. 7. Modeling for High Availability
      1. How Cassandra stores data
        1. Implications of a log-structured storage
      2. Understanding compaction
        1. Size-tiered compaction
        2. Leveled compaction
        3. Date-tiered compaction
      3. CQL under the hood
        1. Single primary key
        2. Compound keys
          1. Partition keys
          2. Clustering columns
          3. Composite partition keys
        3. The importance of the storage model
      4. Understanding queries
        1. Query by key
        2. Range queries
        3. Denormalizing with collections
      5. How collections are stored
        1. Sets
        2. Lists
        3. Maps
      6. Working with time-series data
      7. Designing for immutability
        1. Modeling sensor data
          1. Queries
          2. Time-based ordering
            1. Using a sentinel value
            2. Satisfying our queries
            3. When time is all that matters
      8. Working with geospatial data
      9. Summary
    15. 8. Antipatterns
      1. Multikey queries
      2. Secondary indices
        1. Secondary indices under the hood
      3. Distributed joins
      4. Deleting data
        1. Garbage collection
        2. Resurrecting the dead
        3. Unexpected deletes
        4. The problem with tombstones
        5. Expiring columns
          1. TTL antipatterns
        6. When null does not mean empty
        7. Cassandra is not a queue
      5. Unbounded row growth
      6. Summary
    16. 9. Failing Gracefully
      1. Knowledge is power
        1. Monitoring via Java Management Extensions
          1. Using OpsCenter
        2. Choosing a management toolset
      2. Logging
        1. Cassandra logs
        2. Garbage collector logs
      3. Monitoring node metrics
        1. Thread pools
        2. Column family statistics
        3. Finding latency outliers
        4. Communication metrics
      4. When a node goes down
        1. Marking a downed node
        2. Handling a downed node
        3. Handling slow nodes
      5. Backing up data
        1. Taking a snapshot
        2. Incremental backups
        3. Restoring from a snapshot
      6. Summary
    17. Index