Simplified distributed block storage with strong consistency, like in Ceph
Read in Russian

Features

Server-side features

  • Core: highly available block storage with symmetric clustering and no single point of failure (SPOF)
  • Performance ;-D
  • Multiple redundancy schemes: replication, XOR n+1, and Reed-Solomon erasure codes (based on the jerasure and ISA-L libraries) with any number of data and parity drives in a group
  • Configuration via simple JSON data structures in etcd (parameters, pools and images)
  • Automatic data distribution over OSDs, with support for:
    • Mathematical optimization for better uniformity and less data movement
    • Multiple pools
    • Placement tree, OSD selection by tags (device classes) and placement root
    • Configurable failure domains
  • Recovery of degraded blocks
  • Rebalancing (data movement between OSDs)
  • Lazy fsync support
  • Per-OSD and per-image I/O and space usage statistics in etcd
  • Snapshots and copy-on-write image clones
  • Write throttling to smooth random write workloads in SSD+HDD configurations
  • RDMA/RoCEv2 support via libibverbs
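As an example of the JSON-in-etcd configuration mentioned above, a pool definition might look like the following. This is a plausible sketch only; the exact key layout and parameter names should be checked against the Vitastor parameter reference.

```json
{
  "1": {
    "name": "testpool",
    "scheme": "replicated",
    "pg_size": 2,
    "pg_minsize": 1,
    "pg_count": 256,
    "failure_domain": "host"
  }
}
```

Such a structure would be written to a pools key in etcd (e.g. with `etcdctl put`), after which data distribution over OSDs is computed automatically.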
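To illustrate the XOR n+1 redundancy scheme listed above: one parity block protects n data blocks, and any single lost block can be rebuilt by XOR-ing the remaining blocks together. The sketch below is illustrative only; the real implementation operates on raw device blocks and also supports Reed-Solomon codes via jerasure/ISA-L.

```python
def xor_blocks(blocks):
    """XOR a list of equal-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # n = 3 data blocks
parity = xor_blocks(data)            # the "+1" parity block

# Simulate losing data[1] and rebuilding it from the survivors + parity
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

The same property (any one block recoverable from the others) is what allows a degraded placement group to serve reads and rebuild after an OSD failure.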

Plugins and tools

Roadmap

The following features are planned for the future:

  • Better OSD creation and auto-start tools
  • Other administrative tools
  • Web GUI
  • OpenNebula plugin
  • iSCSI proxy
  • Multi-threaded client
  • Faster failover
  • Scrubbing without checksums (verification of replicas)
  • Checksums
  • Tiered storage (SSD caching)
  • NVDIMM support
  • Compression (possibly)
  • Read caching using system page cache (possibly)