## Vitastor

[Read in Russian](

## The Idea

Make Software-Defined Block Storage Great Again.

Vitastor is a small, simple and fast clustered block storage (storage for VM drives),
architecturally similar to Ceph, which means strong consistency, primary replication, symmetric
clustering and automatic data distribution over any number of drives of any size
with configurable redundancy (replication or erasure codes/XOR).
## Features

Vitastor is currently a pre-release; a lot of features are missing, and you can still expect
breaking changes in the future. However, the following is implemented:

0.5.x (stable):
- Basic part: highly-available block storage with symmetric clustering and no SPOF
- Performance ;-D
- Multiple redundancy schemes: replication, XOR n+1 and Reed-Solomon erasure codes
  based on the jerasure library, with any number of data and parity drives in a group
- Configuration via simple JSON data structures in etcd
- Automatic data distribution over OSDs, with support for:
  - Mathematical optimization for better uniformity and less data movement
  - Multiple pools
  - Placement tree, OSD selection by tags (device classes) and placement root
  - Configurable failure domains
- Recovery of degraded blocks
- Rebalancing (data movement between OSDs)
- Lazy fsync support
- I/O statistics reporting to etcd
- Generic user-space client library
- QEMU driver (built out-of-tree)
- Loadable fio engine for benchmarks (also built out-of-tree)
- NBD proxy for kernel mounts
- Inode removal tool (vitastor-rm)
- Packaging for Debian and CentOS
0.6.x (master):
- Per-inode I/O and space usage statistics
- Inode metadata storage in etcd
- Snapshots and copy-on-write image clones
- Write throttling to smooth random write workloads in SSD+HDD configurations
- RDMA/RoCEv2 support via libibverbs

## Roadmap

- Better OSD creation and auto-start tools
- Other administrative tools
- Plugins for OpenStack, Kubernetes, OpenNebula, Proxmox and other cloud systems
- iSCSI proxy
- Faster failover
- Scrubbing without checksums (verification of replicas)
- Checksums
- Tiered storage
- NVDIMM support
- Web GUI
- Compression (possibly)
- Read caching using system page cache (possibly)
## Architecture

Similarities:

- Just like Ceph, Vitastor has Pools, PGs, OSDs, Monitors, Failure Domains and a Placement Tree.
- Just like Ceph, Vitastor is transactional (even though there's a "lazy fsync mode" which
  doesn't implicitly flush every operation to disks).
- OSDs also have a journal and metadata, and both can also be put on separate drives.
- Just like in Ceph, the client library attempts to recover from any cluster failure, so
  you can basically reboot the whole cluster and only pause, but not crash, your clients
  (I consider it a bug if a client crashes in that case).

Some basic terms for people not familiar with Ceph:

- OSD (Object Storage Daemon) is a process that stores data and serves read/write requests.
- PG (Placement Group) is a container for data that (normally) shares the same replicas.
- Pool is a container for data that has the same redundancy scheme and placement rules.
- Monitor is a separate daemon that watches cluster state and handles failures.
- Failure Domain is a group of OSDs that you allow to fail. It's "host" by default.
- Placement Tree groups OSDs in a hierarchy to later split them into Failure Domains.
Architectural differences from Ceph:

- Vitastor's primary focus is on SSDs. Proper SSD+HDD optimizations may be added in the future, though.
- Vitastor OSD is (and will always be) single-threaded. If you want to dedicate more than 1 core
  per drive, you should run multiple OSDs, each on a different partition of the drive.
  Vitastor isn't CPU-hungry though (unlike Ceph), so 1 core is sufficient in a lot of cases.
- Metadata and journal are always kept in memory. Metadata size depends linearly on drive capacity
  and data store block size, which is 128 KB by default. With 128 KB blocks metadata should occupy
  around 512 MB per 1 TB (which is still less than Ceph wants); see the rough estimate after this list.
  The journal doesn't have to be big; the example test below was conducted with only a 16 MB journal.
  A big journal is probably even harmful, as dirty write metadata also takes some memory.
- The Vitastor storage layer doesn't have internal copy-on-write or redirect-write. I know that maybe
  it's possible to create a good copy-on-write storage, but it's much harder and makes performance
  less deterministic, so CoW isn't used in Vitastor.
- The basic layer of Vitastor is block storage with fixed-size blocks, not object storage with
  rich semantics like in Ceph (RADOS).
- There's a "lazy fsync" mode which batches writes before flushing them to disk.
  It makes it possible to use Vitastor with desktop SSDs, but it still lowers performance due to additional
  network roundtrips, so use server SSDs with capacitor-based power loss protection
  ("Advanced Power Loss Protection") for best performance.
- PGs are ephemeral. This means that they aren't stored on data disks and only exist in memory
  while OSDs are running.
- The recovery process is per-object (per-block), not per-PG. Also, there are no PGLOGs.
- Monitors don't store data. Cluster configuration and state are stored in etcd as simple human-readable
  JSON structures. Monitors only watch cluster state and handle data movement.
  Thus Vitastor's Monitor isn't a critical component of the system and is more similar to Ceph's Manager.
  Vitastor's Monitor is implemented in node.js.
- PG distribution isn't based on consistent hashes. All PG mappings are stored in etcd.
  Rebalancing PGs between OSDs is done by mathematical optimization: the data distribution problem
  is reduced to a linear programming problem and solved by lp_solve. This allows for almost
  perfect (96-99% uniformity, compared to Ceph's 80-90%) data distribution in most cases, the ability
  to map PGs by hand without breaking rebalancing logic, reduced OSD peer-to-peer communication
  (on average, OSDs have fewer peers) and less data movement. It also probably has a drawback:
  this method may fail in very large clusters, but up to several hundred OSDs it's perfectly fine.
  It's also easy to add consistent hashes in the future if something proves their necessity.
- There's no separate CRUSH layer. You select the pool redundancy scheme, placement root, failure domain
  and so on directly in the pool configuration.
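As a rough illustration of the in-memory metadata size mentioned above (the per-object figure
is only derived from the 512 MB per 1 TB ratio stated here, so treat it as an approximation):

```
1 TB / 128 KB blocks = 8388608 objects
512 MB / 8388608 objects ≈ 64 bytes of metadata per object in RAM
```

Doubling the block size (or halving the drive capacity) therefore roughly halves metadata memory usage.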
## Understanding Storage Performance

The most important thing for fast storage is latency, not parallel iops.

The best possible latency is achieved with one thread and a queue depth of 1, which basically means
"client load as low as possible". In this case IOPS = 1/latency, and this number doesn't
scale with the number of servers, drives, server processes, threads and so on.
Single-threaded IOPS and latency numbers only depend on *how fast a single daemon is*.
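For example, applying IOPS = 1/latency to the latency figures used later in this document:

```
0.04 ms latency  ->  1 / 0.00004 s = 25000 iops at T1Q1 (a fast local SSD)
0.6 ms latency   ->  1 / 0.0006 s  ≈ 1700 iops at T1Q1 (a typical SDS 4 KB write)
```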
Why is this important? Because some applications *can't* use
a queue depth greater than 1 since their task isn't parallelizable. A notable example
is any ACID DBMS, because all of them write their WALs sequentially with fsync()s.

fsync, by the way, is another important thing often missing in benchmarks. The point is
that drives have cache buffers and don't guarantee that your data is actually persisted
until you call fsync(), which is translated to a FLUSH CACHE command by the OS.

Desktop SSDs are very fast without fsync - NVMes, for example, can process ~80000 write
operations per second with a queue depth of 1 without fsync - but they're really slow with
fsync because they have to actually write data to flash chips when you call fsync. A typical
number is around 1000-2000 iops with fsync.

Server SSDs often have supercapacitors that act as a built-in UPS and allow the drive
to flush its DRAM cache to the persistent flash storage when a power loss occurs.
This makes them perform equally well with and without fsync. This feature is called
"Advanced Power Loss Protection" by Intel; other vendors use similar terms or simply call it
"Full Capacitor-Based Power Loss Protection".

All software-defined storage systems that I currently know of are slow in terms of latency.
Notable examples are Ceph and the internal SDSes used by cloud providers like Amazon, Google,
Yandex and so on. They're all slow and can only reach ~0.3ms read and ~0.6ms 4 KB write latency
with best-in-slot hardware. And that's in the SSD era, when you can buy an SSD with ~0.04ms latency for $100.

I use the following 6 commands with small variations to benchmark any storage:

- Linear write:
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=write -runtime=60 -filename=/dev/sdX`
- Linear read:
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=read -runtime=60 -filename=/dev/sdX`
- Random write latency (T1Q1, this hurts storages the most):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/sdX`
- Random read latency (T1Q1):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=1 -rw=randread -runtime=60 -filename=/dev/sdX`
- Parallel write iops (use numjobs if a single CPU core is insufficient to saturate the load):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128 [-numjobs=4 -group_reporting] -rw=randwrite -runtime=60 -filename=/dev/sdX`
- Parallel read iops (use numjobs if a single CPU core is insufficient to saturate the load):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128 [-numjobs=4 -group_reporting] -rw=randread -runtime=60 -filename=/dev/sdX`
## Vitastor's Theoretical Maximum Random Access Performance

Replicated setups:
- Single-threaded (T1Q1) read latency: 1 network roundtrip + 1 disk read.
- Single-threaded write+fsync latency:
  - With immediate commit: 2 network roundtrips + 1 disk write.
  - With lazy commit: 4 network roundtrips + 1 disk write + 1 disk flush.
- Saturated parallel read iops: min(network bandwidth, sum(disk read iops)).
- Saturated parallel write iops: min(network bandwidth, sum(disk write iops / number of replicas / write amplification)).

EC/XOR setups:
- Single-threaded (T1Q1) read latency: 1.5 network roundtrips + 1 disk read.
- Single-threaded write+fsync latency:
  - With immediate commit: 3.5 network roundtrips + 1 disk read + 2 disk writes.
  - With lazy commit: 5.5 network roundtrips + 1 disk read + 2 disk writes + 2 disk fsyncs.
  - 0.5 is actually (k-1)/k, which means that the additional roundtrip doesn't happen when
    the read sub-operation can be served locally.
- Saturated parallel read iops: min(network bandwidth, sum(disk read iops)).
- Saturated parallel write iops: min(network bandwidth, sum(disk write iops * number of data drives / (number of data + parity drives) / write amplification)).

In fact, disk write iops in these formulas should be measured under a mixed ~10% read / ~90% write workload.
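As an illustration of the replicated saturated-write formula, with assumed (not measured) numbers:

```
24 drives x 25000 steady write iops each, 2 replicas, write amplification ~4:
  sum(disk write iops) / replicas / WA = 24 * 25000 / 2 / 4 = 75000 iops
cluster write ceiling = min(network bandwidth, 75000 iops)
```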
Write amplification for 4 KB blocks is usually 3-5 in Vitastor:

1. Journal block write
2. Journal data write
3. Metadata block write
4. Another journal block write for EC/XOR setups
5. Data block write

If you manage to get an SSD which handles 512-byte blocks well (Optane?), you may
lower 1, 3 and 4 to 512 bytes (1/8 of the data size) and get a WA as low as 2.375.

Lazy fsync also reduces WA for parallel workloads, because journal blocks are only
written when they fill up or fsync is requested.
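As a worked example of the figures above, for a single 4 KB write in an EC/XOR setup
(following the 5-item breakdown; 4 KB journal/metadata blocks in the default case versus
hypothetical 512-byte ones on a suitable drive):

```
All blocks 4 KB:               (4 + 4 + 4 + 4 + 4) KB written / 4 KB of data       = WA 5
512-byte journal/meta blocks:  (0.5 + 4 + 0.5 + 0.5 + 4) KB / 4 KB data = 9.5 / 4  = WA 2.375
```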
## Example Comparison with Ceph

Hardware configuration: 4 nodes, each with:
- 6x SATA SSD Intel D3-4510 3.84 TB
- 2x Xeon Gold 6242 (16 cores @ 2.8 GHz)
- 384 GB RAM
- 1x 25 GbE network interface (Mellanox ConnectX-4 LX), connected to a Juniper QFX5200 switch

CPU powersaving was disabled. Both Vitastor and Ceph were configured with 2 OSDs per 1 SSD.
All of the results below apply to 4 KB blocks and random access (unless indicated otherwise).

Raw drive performance:
- T1Q1 write ~27000 iops (~0.037ms latency)
- T1Q1 read ~9800 iops (~0.101ms latency)
- T1Q32 write ~60000 iops
- T1Q32 read ~81700 iops

Ceph 15.2.4 (Bluestore):
- T1Q1 write ~1000 iops (~1ms latency)
- T1Q1 read ~1750 iops (~0.57ms latency)
- T8Q64 write ~100000 iops, total CPU usage by OSDs about 40 virtual cores on each node
- T8Q64 read ~480000 iops, total CPU usage by OSDs about 40 virtual cores on each node

T8Q64 tests were conducted over 8 x 400 GB RBD images from all hosts (every host was running 2 instances of fio),
because Ceph has performance penalties related to running multiple clients over a single RBD image.
cephx_sign_messages was set to false during the tests; RocksDB and Bluestore settings were left at defaults.

In fact, that's not that bad for Ceph - these servers are an example of well-balanced Ceph nodes.
However, CPU usage and I/O latency were through the roof, as usual.

Vitastor:
- T1Q1 write: 7087 iops (0.14ms latency)
- T1Q1 read: 6838 iops (0.145ms latency)
- T2Q64 write: 162000 iops, total CPU usage by OSDs about 3 virtual cores on each node
- T8Q64 read: 895000 iops, total CPU usage by OSDs about 4 virtual cores on each node
- Linear write (4M T1Q32): 2800 MB/s
- Linear read (4M T1Q32): 1500 MB/s

The T8Q64 read test was conducted over one larger inode (3.2 TB) from all hosts (every host was running 2 instances of fio);
Vitastor has no performance penalties related to running multiple clients over a single inode.
When conducted from one node with all primary OSDs moved to other nodes, the result was slightly lower (689000 iops),
because all operations then resulted in network roundtrips between the client and the primary OSD.
When fio was colocated with OSDs (like in the Ceph benchmarks above), 1/4 of the read workload actually
used the loopback network.

Vitastor was configured with: `--disable_data_fsync true --immediate_commit all --flusher_count 8
--disk_alignment 4096 --journal_block_size 4096 --meta_block_size 4096
--journal_no_same_sector_overwrites true --journal_sector_buffer_count 1024
--journal_size 16777216`.
### EC/XOR 2+1

Vitastor:
- T1Q1 write: 2808 iops (~0.355ms latency)
- T1Q1 read: 6190 iops (~0.16ms latency)
- T2Q64 write: 85500 iops, total CPU usage by OSDs about 3.4 virtual cores on each node
- T8Q64 read: 812000 iops, total CPU usage by OSDs about 4.7 virtual cores on each node
- Linear write (4M T1Q32): 3200 MB/s
- Linear read (4M T1Q32): 1800 MB/s

Ceph:
- T1Q1 write: 730 iops (~1.37ms latency)
- T1Q1 read: 1500 iops with cold cache (~0.66ms latency), 2300 iops after 2-minute metadata cache warmup (~0.435ms latency)
- T4Q128 write (4 RBD images): 45300 iops, total CPU usage by OSDs about 30 virtual cores on each node
- T8Q64 read (4 RBD images): 278600 iops, total CPU usage by OSDs about 40 virtual cores on each node
- Linear write (4M T1Q32): 1950 MB/s before preallocation, 2500 MB/s after preallocation
- Linear read (4M T1Q32): 2400 MB/s
### NBD

NBD is currently required to mount Vitastor via the kernel, but it imposes additional overhead
due to extra copying between the kernel and userspace. This mostly hurts linear
bandwidth, not iops.

Vitastor with single-threaded NBD on the same hardware:
- T1Q1 write: 6000 iops (0.166ms latency)
- T1Q1 read: 5518 iops (0.18ms latency)
- T1Q128 write: 94400 iops
- T1Q128 read: 103000 iops
- Linear write (4M T1Q128): 1266 MB/s (compared to 2800 MB/s via fio)
- Linear read (4M T1Q128): 975 MB/s (compared to 1500 MB/s via fio)
## Installation

### Debian

- Trust the Vitastor package signing key:
  `wget -q -O - | sudo apt-key add -`
- Add the Vitastor package repository to your /etc/apt/sources.list:
  - Debian 11 (Bullseye/Sid): `deb bullseye main`
  - Debian 10 (Buster): `deb buster main`
- For Debian 10 (Buster) also enable the backports repository:
  `deb buster-backports main`
- Install packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64 qemu`

### CentOS

- Add the Vitastor package repository:
  - CentOS 7: `yum install`
  - CentOS 8: `dnf install`
- Enable EPEL: `yum/dnf install epel-release`
- Enable additional CentOS repositories:
  - CentOS 7: `yum install centos-release-scl`
  - CentOS 8: `dnf install centos-release-advanced-virtualization`
- Enable elrepo-kernel:
  - CentOS 7: `yum install`
  - CentOS 8: `dnf install`
- Install packages: `yum/dnf install vitastor lpsolve etcd kernel-ml qemu-kvm`
### Building from Source

- Install Linux kernel 5.4 or newer for io_uring support. 5.8 or later is highly recommended because
  there is at least one known io_uring hang with 5.4 and an HP SmartArray controller.
- Install liburing 0.4 or newer and its headers.
- Install lp_solve.
- Install etcd, at least version 3.4.15. Earlier versions won't work because of various bugs,
  for example [#12402]( You can also take 3.4.13
  with this specific fix from here:, branch release-3.4.
- Install node.js 10 or newer.
- Install gcc and g++ 8.x or newer.
- Clone with submodules.
- Install QEMU 3.0+, get its source, begin to build it, stop the build and copy the headers:
  - `<qemu>/include` &rarr; `<vitastor>/qemu/include`
  - Debian:
    * Use qemu packages from the main repository
    * `<qemu>/b/qemu/config-host.h` &rarr; `<vitastor>/qemu/b/qemu/config-host.h`
    * `<qemu>/b/qemu/qapi` &rarr; `<vitastor>/qemu/b/qemu/qapi`
  - CentOS 8:
    * Use qemu packages from the Advanced-Virtualization repository. To enable it, run
      `yum install centos-release-advanced-virtualization.noarch` and then `yum install qemu`
    * `<qemu>/config-host.h` &rarr; `<vitastor>/qemu/b/qemu/config-host.h`
    * For QEMU 3.0+: `<qemu>/qapi` &rarr; `<vitastor>/qemu/b/qemu/qapi`
    * For QEMU 2.0+: `<qemu>/qapi-types.h` &rarr; `<vitastor>/qemu/b/qemu/qapi-types.h`
  - `config-host.h` and `qapi` are required because they contain generated headers
- You can also rebuild QEMU with a patch that makes LD_PRELOAD unnecessary to load the vitastor driver.
  See `qemu-*.*-vitastor.patch`.
- Install fio 3.7 or later, get its source and symlink it into `<vitastor>/fio`.
- Build & install Vitastor with `mkdir build && cd build && cmake .. && make -j8 && make install`.
  Pay attention to the `QEMU_PLUGINDIR` cmake option - it must be set to `qemu-kvm` on RHEL.
## Running

Please note that the startup procedure isn't currently simple - you specify the configuration
and calculate disk offsets almost by hand. This will be fixed in the near future.

- Get some SATA or NVMe SSDs with capacitors (server-grade drives). You can use desktop SSDs
  with lazy fsync, but prepare for inferior single-thread latency.
- Get a fast network (at least 10 Gbit/s).
- Disable CPU powersaving: `cpupower idle-set -D 0 && cpupower frequency-set -g performance`.
- Check `/usr/lib/vitastor/mon/` and `/usr/lib/vitastor/mon/` and
  put desired values into the variables at the top of these files.
- Create systemd units for the monitor and etcd: `/usr/lib/vitastor/mon/`
- Create systemd units for your OSDs: `/usr/lib/vitastor/mon/ /dev/disk/by-partuuid/XXX [/dev/disk/by-partuuid/YYY ...]`
- You can edit the units and change OSD configuration. Notable configuration variables:
  - `disable_data_fsync 1` - only safe with server-grade drives with capacitors.
  - `immediate_commit all` - use this if all your drives are server-grade.
  - `disable_device_lock 1` - only required if you run multiple OSDs on one block device.
  - `flusher_count 256` - the flusher is a micro-thread that removes old data from the journal.
    You don't have to worry about this parameter anymore; 256 is enough.
  - `disk_alignment`, `journal_block_size`, `meta_block_size` should be set to the internal
    block size of your SSDs, which is 4096 on most drives.
  - `journal_no_same_sector_overwrites true` prevents multiple overwrites of the same journal sector.
    Most (99%) SSDs don't need this option, but the Intel D3-4510 does because it doesn't like when you
    overwrite the same sector twice in a short period of time. The setting forces Vitastor to never
    overwrite the same journal sector twice in a row, which makes the D3-4510 almost happy. Not totally
    happy, because overwrites of the same block can still happen in the metadata area... When this
    setting is set, it is also required to raise the `journal_sector_buffer_count` setting, which is the
    number of dirty journal sectors that may be written to at the same time.
- `systemctl start` everywhere.
- Create the global configuration in etcd: `etcdctl --endpoints=... put /vitastor/config/global '{"immediate_commit":"all"}'`
  (if all your drives have capacitors).
- Create the pool configuration in etcd: `etcdctl --endpoints=... put /vitastor/config/pools '{"1":{"name":"testpool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'`.
  For jerasure pools the configuration should look like the following: `2:{"name":"ecpool","scheme":"jerasure","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}`.
- At this point, one of the monitors will configure PGs and OSDs will start them.
- You can check PG states with `etcdctl --endpoints=... get --prefix /vitastor/pg/state`. All PGs should become 'active'.
### Name an image

```
etcdctl --endpoints=<etcd> put /vitastor/config/inode/<pool>/<inode> '{"name":"<name>","size":<size>[,"parent_id":<parent_inode_number>][,"readonly":true]}'
```

For example:

```
etcdctl --endpoints= put /vitastor/config/inode/1/1 '{"name":"testimg","size":2147483648}'
```

If you specify parent_id, the image becomes a CoW clone: all writes go to the new inode, and reads first check it
and then fall through to its parents. You can then make the parent readonly by updating its entry with `"readonly":true`
for safety and basically treat it as a snapshot.

So, to create a snapshot, you basically rename the previous upper layer (for example from testimg to testimg@0), make it readonly
and create a new top layer with the original name (testimg) and the previous one as a parent.
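A minimal sketch of that procedure with etcdctl, assuming the example image above (pool 1, inode 1)
and a free inode number 2 - adjust the endpoint, pool and inode numbers for your cluster:

```
# rename the current top layer to testimg@0 and make it readonly
etcdctl --endpoints=<etcd> put /vitastor/config/inode/1/1 '{"name":"testimg@0","size":2147483648,"readonly":true}'

# create a new top layer named testimg with the old one as its parent
etcdctl --endpoints=<etcd> put /vitastor/config/inode/1/2 '{"name":"testimg","size":2147483648,"parent_id":1}'
```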
### Run fio benchmarks

fio command example:

```
fio -thread -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -etcd= -image=testimg
```

If you don't want to access your image by name, you can specify the pool number, inode number and size
(`-pool=1 -inode=1 -size=400G`) instead of the image name (`-image=testimg`).
### Upload VM image

Use qemu-img with a `vitastor:etcd_host=<HOST>:image=<IMAGE>` disk filename. For example:

```
qemu-img convert -f qcow2 debian10.qcow2 -p -O raw 'vitastor:etcd_host=\:2379/v3:image=testimg'
```

Note that the command must be run with `LD_PRELOAD=/usr/lib/x86_64-linux-gnu/qemu/ qemu-img ...`
if you use unmodified QEMU.

You can also specify `:pool=<POOL>:inode=<INODE>:size=<SIZE>` instead of `:image=<IMAGE>`
if you don't want to use inode metadata.

### Start a VM

Run QEMU with `-drive file=vitastor:etcd_host=<HOST>:image=<IMAGE>` and a 4 KB physical block size.
For example:

```
qemu-system-x86_64 -enable-kvm -m 1024
  -drive 'file=vitastor:etcd_host=\:2379/v3:image=testimg',format=raw,if=none,id=drive-virtio-disk0,cache=none
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=off,physical_block_size=4096,logical_block_size=512
  -vnc
```

You can also specify `:pool=<POOL>:inode=<INODE>:size=<SIZE>` instead of `:image=<IMAGE>`,
just like in qemu-img.
### Remove inode

Use vitastor-rm. For example:

```
vitastor-rm --etcd_address --pool 1 --inode 1 --parallel_osds 16 --iodepth 32
```

### NBD

To create a local block device for a Vitastor image, use NBD. For example:

```
vitastor-nbd map --etcd_address --image testimg
```

It will output the device name, like /dev/nbd0, which you can then format and mount as a normal block device.

Again, you can use `--pool <POOL> --inode <INODE> --size <SIZE>` instead of `--image <IMAGE>` if you want.
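For instance, once the device is mapped you can treat it like any other block device
(standard Linux commands, assuming it came up as /dev/nbd0):

```
mkfs.ext4 /dev/nbd0
mount /dev/nbd0 /mnt
```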
## Known Problems

- Object deletion requests may currently lead to 'incomplete' objects in EC pools
  if your OSDs crash during deletion, because proper handling of object cleanup
  in a cluster should be "three-phase" and it's currently not implemented.
  Just repeat the removal request in that case.
## Implementation Principles

- I like architecturally simple solutions. Vitastor is and will always be designed
  exactly like that.
- I also like reinventing the wheel to some extent, like writing my own HTTP client
  for etcd interaction instead of using prebuilt libraries, because in this case
  I'm confident about what my code does and what it doesn't do.
- I don't care about C++ "best practices" like RAII or proper inheritance or usage of
  smart pointers or whatever, and I don't intend to change my mind, so if you're here
  looking for ideal reference C++ code, this probably isn't the right place.
- I like node.js better than any other dynamically-typed language interpreter
  because it's faster than any other interpreter in the world, has a neutral C-like
  syntax and a built-in event loop. That's why the Monitor is implemented in node.js.
## Author and License

Copyright (c) Vitaliy Filippov (vitalif [at], 2019+

Join the Vitastor Telegram Chat:

All server-side code (OSD, Monitor and so on) is licensed under the terms of the
Vitastor Network Public License 1.1 (VNPL 1.1), a copyleft license based on
GNU GPLv3.0 with an additional "Network Interaction" clause which requires
opensourcing all programs directly or indirectly interacting with Vitastor
through a computer network and expressly designed to be used in conjunction
with it ("Proxy Programs"). Proxy Programs may be made public not only under
the terms of the same license, but also under the terms of any GPL-compatible
Free Software License, as listed by the Free Software Foundation.
This is a stricter copyleft license than the Affero GPL.

Please note that VNPL doesn't require you to open the code of proprietary
software running inside a VM if it's not specially designed to be used with
Vitastor.

Basically, you can't use the software in a proprietary environment to provide
its functionality to users without opensourcing all intermediary components
standing between the user and Vitastor or purchasing a commercial license
from the author 😀.

Client libraries (cluster_client and so on) are dual-licensed under the same
VNPL 1.1 and also GNU GPL 2.0 or later to allow for compatibility with GPLed
software like QEMU and fio.

You can find the full text of VNPL-1.1 in the file [VNPL-1.1.txt](VNPL-1.1.txt).
GPL 2.0 is also included in this repository as [GPL-2.0.txt](GPL-2.0.txt).