## Vitastor

## The Idea

Make Software-Defined Block Storage Great Again.

Vitastor is a small, simple and fast clustered block storage (storage for VM drives),
architecturally similar to Ceph which means strong consistency, primary-replication, symmetric
clustering and automatic data distribution over any number of drives of any size
with configurable redundancy (replication or erasure codes/XOR).

## Features

Vitastor is currently a pre-release: a lot of features are missing, and you can still expect
breaking changes in the future. However, the following is implemented:

- Basic part: highly-available block storage with symmetric clustering and no SPOF
- Performance ;-D
- Multiple redundancy schemes: Replication, XOR n+1, Reed-Solomon erasure codes
  based on the jerasure library with any number of data and parity drives in a group
- Configuration via simple JSON data structures in etcd
- Automatic data distribution over OSDs, with support for:
  - Mathematical optimization for better uniformity and less data movement
  - Multiple pools
  - Placement tree, OSD selection by tags (device classes) and placement root
  - Configurable failure domains
- Recovery of degraded blocks
- Rebalancing (data movement between OSDs)
- Lazy fsync support
- I/O statistics reporting to etcd
- Generic user-space client library
- QEMU driver (built out-of-tree)
- Loadable fio engine for benchmarks (also built out-of-tree)
- NBD proxy for kernel mounts
- Inode removal tool (vitastor-rm)
- Packaging for Debian and CentOS
- Per-inode I/O and space usage statistics

## Roadmap

- OSD creation tool (OSDs currently have to be created by hand)
- Other administrative tools
- Proxmox and OpenNebula plugins
- iSCSI proxy
- Inode metadata storage in etcd
- Snapshots and copy-on-write image clones
- Operation timeouts and better failure detection
- Scrubbing without checksums (verification of replicas)
- Checksums
- SSD+HDD optimizations, possibly including tiered storage and soft journal flushes
- RDMA and NVDIMM support
- Web GUI
- Compression (possibly)
- Read caching using system page cache (possibly)

## Architecture

Similarities:

- Just like Ceph, Vitastor has Pools, PGs, OSDs, Monitors, Failure Domains, Placement Tree.
- Just like Ceph, Vitastor is transactional (even though there's a "lazy fsync" mode which
  doesn't implicitly flush every operation to disks).
- OSDs also have journal and metadata and they can also be put on separate drives.
- Just like in Ceph, the client library attempts to recover from any cluster failure, so
  you can basically reboot the whole cluster and your clients will only pause, but not crash
  (I consider it a bug if a client crashes in that case).

Some basic terms for people not familiar with Ceph:

- OSD (Object Storage Daemon) is a process that stores data and serves read/write requests.
- PG (Placement Group) is a container for data that (normally) shares the same replicas.
- Pool is a container for data that has the same redundancy scheme and placement rules.
- Monitor is a separate daemon that watches cluster state and handles failures.
- Failure Domain is a group of OSDs that you allow to fail. It's "host" by default.
- Placement Tree groups OSDs in a hierarchy to later split them into Failure Domains.
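
To make these terms concrete, here is how they map onto a pool definition. This is simply the example pool from the Running section below, repeated for illustration: `pg_size` is the number of replicas, `pg_count` is the number of PGs, and `failure_domain` tells the monitor which group of OSDs is allowed to fail together.

```
# Pool id 1: 2-way replication, 256 PGs, replicas placed on different hosts
# so that losing one host doesn't lose both copies of any block
etcdctl --endpoints=... put /vitastor/config/pools \
  '{"1":{"name":"testpool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'
```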

Architectural differences from Ceph:

- Vitastor's primary focus is on SSDs. Proper SSD+HDD optimizations may be added in the future, though.
- Vitastor OSD is (and will always be) single-threaded. If you want to dedicate more than 1 core
  per drive, you should run multiple OSDs, each on a different partition of the drive.
  Vitastor isn't CPU-hungry though (as opposed to Ceph), so 1 core is sufficient in a lot of cases.
- Metadata and journal are always kept in memory. Metadata size depends linearly on drive capacity
  and data store block size, which is 128 KB by default. With 128 KB blocks metadata should occupy
  around 512 MB per 1 TB (which is still less than Ceph wants). The journal doesn't have to be big;
  the example test below was conducted with only a 16 MB journal. A big journal is probably even
  harmful, as dirty write metadata also takes some memory (see the sizing sketch right after this list).
- Vitastor's storage layer doesn't have internal copy-on-write or redirect-write. I know that it's
  probably possible to create a good copy-on-write storage, but it's much harder and makes performance
  less deterministic, so CoW isn't used in Vitastor.
- The basic layer of Vitastor is block storage with fixed-size blocks, not object storage with
  rich semantics like in Ceph (RADOS).
- There's a "lazy fsync" mode which allows batching writes before flushing them to disk.
  It makes it possible to use Vitastor with desktop SSDs, but it still lowers performance due to additional
  network roundtrips, so use server SSDs with capacitor-based power loss protection
  ("Advanced Power Loss Protection") for best performance.
- PGs are ephemeral. This means that they aren't stored on data disks and only exist in memory
  while OSDs are running.
- The recovery process is per-object (per-block), not per-PG. Also, there are no PGLOGs.
- Monitors don't store data. Cluster configuration and state are stored in etcd as simple human-readable
  JSON structures. Monitors only watch cluster state and handle data movement.
  Thus Vitastor's Monitor isn't a critical component of the system and is more similar to Ceph's Manager.
  Vitastor's Monitor is implemented in node.js.
- PG distribution isn't based on consistent hashes. All PG mappings are stored in etcd.
  Rebalancing PGs between OSDs is done by mathematical optimization: the data distribution problem
  is reduced to a linear programming problem and solved by lp_solve. This allows for almost
  perfect (96-99% uniformity, compared to Ceph's 80-90%) data distribution in most cases, the ability
  to map PGs by hand without breaking the rebalancing logic, reduced OSD peer-to-peer communication
  (on average, OSDs have fewer peers) and less data movement. It probably also has a drawback:
  the method may fail in very large clusters, but up to several hundred OSDs it's perfectly fine.
  It's also easy to add consistent hashes in the future if something proves their necessity.
- There's no separate CRUSH layer. You select the pool redundancy scheme, placement root, failure domain
  and so on directly in the pool configuration.
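
A back-of-envelope version of the metadata sizing mentioned above (the ~64 bytes per block is not a documented figure, just the per-entry size implied by the 512 MB per 1 TB ratio):

$$
\frac{1\ \text{TiB}}{128\ \text{KiB}} = 2^{23} \approx 8.4\ \text{million blocks}, \qquad 2^{23} \times 64\ \text{B} = 512\ \text{MiB of metadata per TiB of data}.
$$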

## Understanding Storage Performance

The most important thing for fast storage is latency, not parallel iops.

The best possible latency is achieved with one thread and a queue depth of 1, which basically means
"client load as low as possible". In this case IOPS = 1/latency, and this number doesn't
scale with the number of servers, drives, server processes, threads and so on.
Single-threaded IOPS and latency numbers only depend on *how fast a single daemon is*.
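
In other words (the latencies here are just round illustrative numbers, not measurements):

$$
\text{IOPS}_{T1Q1} = \frac{1}{\text{latency}}: \qquad 0.1\ \text{ms} \Rightarrow 10000\ \text{iops}, \qquad 0.5\ \text{ms} \Rightarrow 2000\ \text{iops}.
$$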

Why is it important? It's important because some applications *can't* use
a queue depth greater than 1, because their task isn't parallelizable. A notable example
is any ACID DBMS, because all of them write their WALs sequentially with fsync()s.

fsync, by the way, is another important thing often missing in benchmarks. The point is
that drives have cache buffers and don't guarantee that your data is actually persisted
until you call fsync(), which is translated to a FLUSH CACHE command by the OS.

Desktop SSDs are very fast without fsync - NVMes, for example, can process ~80000 write
operations per second with a queue depth of 1 without fsync - but they're really slow with
fsync, because they have to actually write data to flash chips when you call fsync. The typical
number is around 1000-2000 iops with fsync.

Server SSDs often have supercapacitors that act as a built-in UPS and allow the drive
to flush its DRAM cache to the persistent flash storage when a power loss occurs.
This makes them perform equally well with and without fsync. This feature is called
"Advanced Power Loss Protection" by Intel; other vendors either call it similarly
or directly as "Full Capacitor-Based Power Loss Protection".

All software-defined storages that I currently know of are slow in terms of latency.
Notable examples are Ceph and the internal SDSes used by cloud providers like Amazon, Google,
Yandex and so on. They're all slow and can only reach ~0.3ms read and ~0.6ms 4 KB write latency
with best-in-slot hardware.

And that's in the SSD era, when you can buy an SSD with ~0.04ms latency for $100.

I use the following 6 commands with small variations to benchmark any storage:

- Linear write:
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=write -runtime=60 -filename=/dev/sdX`
- Linear read:
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=read -runtime=60 -filename=/dev/sdX`
- Random write latency (T1Q1, this hurts storages the most):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/sdX`
- Random read latency (T1Q1):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=1 -rw=randread -runtime=60 -filename=/dev/sdX`
- Parallel write iops (use numjobs if a single CPU core is insufficient to saturate the load):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128 [-numjobs=4 -group_reporting] -rw=randwrite -runtime=60 -filename=/dev/sdX`
- Parallel read iops (use numjobs if a single CPU core is insufficient to saturate the load):
  `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128 [-numjobs=4 -group_reporting] -rw=randread -runtime=60 -filename=/dev/sdX`

## Vitastor's Theoretical Maximum Random Access Performance

Replicated setups:

- Single-threaded (T1Q1) read latency: 1 network roundtrip + 1 disk read.
- Single-threaded write+fsync latency:
  - With immediate commit: 2 network roundtrips + 1 disk write.
  - With lazy commit: 4 network roundtrips + 1 disk write + 1 disk flush.
- Saturated parallel read iops: min(network bandwidth, sum(disk read iops)).
- Saturated parallel write iops: min(network bandwidth, sum(disk write iops / number of replicas / write amplification)).

EC/XOR setups:

- Single-threaded (T1Q1) read latency: 1.5 network roundtrips + 1 disk read.
- Single-threaded write+fsync latency:
  - With immediate commit: 3.5 network roundtrips + 1 disk read + 2 disk writes.
  - With lazy commit: 5.5 network roundtrips + 1 disk read + 2 disk writes + 2 disk fsyncs.
  - The 0.5 is actually (k-1)/k, where k is the number of data chunks: the extra roundtrip doesn't happen when
    the read sub-operation can be served locally, so for a 2+1 scheme it averages out to 0.5.
- Saturated parallel read iops: min(network bandwidth, sum(disk read iops)).
- Saturated parallel write iops: min(network bandwidth, sum(disk write iops * number of data drives / (number of data + parity drives) / write amplification)).
  In fact, you should use the disk write iops measured under a ~10% read / ~90% write workload in this formula.

Write amplification for 4 KB blocks is usually 3-5 in Vitastor:

1. Journal block write
2. Journal data write
3. Metadata block write
4. Another journal block write (for EC/XOR setups)
5. Data block write

If you manage to get an SSD which handles 512-byte blocks well (Optane?), you may
lower writes 1, 3 and 4 to 512 bytes (1/8 of the data size) and get a WA as low as 2.375.
Lazy fsync also reduces WA for parallel workloads, because journal blocks are only
written when they fill up or an fsync is requested.
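
As a worked example of the 2.375 figure (assuming a 4 KB client write on such a drive, with writes 1, 3 and 4 reduced to 512 bytes while the journal data and data block writes stay at 4 KB):

$$
\mathrm{WA} = 3 \times \frac{512}{4096} + 1 + 1 = 0.375 + 2 = 2.375.
$$

With all five writes done as full 4 KB blocks the same sum gives WA = 5, and a replicated setup without write 4 gives WA = 4, consistent with the 3-5 range above.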

## Example Comparison with Ceph

Hardware configuration: 4 nodes, each with:

- 6x SATA SSD Intel D3-4510 3.84 TB
- 2x Xeon Gold 6242 (16 cores @ 2.8 GHz)
- 384 GB RAM
- 1x 25 GbE network interface (Mellanox ConnectX-4 LX), connected to a Juniper QFX5200 switch

CPU powersaving was disabled. Both Vitastor and Ceph were configured with 2 OSDs per 1 SSD.

All of the results below apply to 4 KB blocks and random access (unless indicated otherwise).

Raw drive performance:

- T1Q1 write ~27000 iops (~0.037ms latency)
- T1Q1 read ~9800 iops (~0.101ms latency)
- T1Q32 write ~60000 iops
- T1Q32 read ~81700 iops

Ceph 15.2.4 (Bluestore):

- T1Q1 write ~1000 iops (~1ms latency)
- T1Q1 read ~1750 iops (~0.57ms latency)
- T8Q64 write ~100000 iops, total CPU usage by OSDs about 40 virtual cores on each node
- T8Q64 read ~480000 iops, total CPU usage by OSDs about 40 virtual cores on each node

T8Q64 tests were conducted over 8 400GB RBD images from all hosts (every host was running 2 instances of fio),
because Ceph has performance penalties related to running multiple clients over a single RBD image.
cephx_sign_messages was set to false during the tests; RocksDB and Bluestore settings were left at their defaults.

In fact, that's not that bad for Ceph - these servers are an example of well-balanced Ceph nodes.
However, CPU usage and I/O latency were through the roof, as usual.

Vitastor:

- T1Q1 write: 7087 iops (0.14ms latency)
- T1Q1 read: 6838 iops (0.145ms latency)
- T2Q64 write: 162000 iops, total CPU usage by OSDs about 3 virtual cores on each node
- T8Q64 read: 895000 iops, total CPU usage by OSDs about 4 virtual cores on each node
- Linear write (4M T1Q32): 2800 MB/s
- Linear read (4M T1Q32): 1500 MB/s

The T8Q64 read test was conducted over 1 larger inode (3.2T) from all hosts (every host was running 2 instances of fio);
Vitastor has no performance penalties related to running multiple clients over a single inode.
If conducted from one node with all primary OSDs moved to other nodes, the result was slightly lower (689000 iops),
because in that case all operations resulted in network roundtrips between the client and the primary OSD.
When fio was colocated with the OSDs (like in the Ceph benchmarks above), 1/4 of the read workload actually
used the loopback network.

Vitastor was configured with: `--disable_data_fsync true --immediate_commit all --flusher_count 8
--disk_alignment 4096 --journal_block_size 4096 --meta_block_size 4096
--journal_no_same_sector_overwrites true --journal_sector_buffer_count 1024
--journal_size 16777216`.
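
As a rough cross-check of the T2Q64 write result against the saturated-write formula from the previous section (assuming 2 replicas and a write amplification of about 4; the benchmark's actual pool settings aren't listed here, so both values are illustrative):

$$
\frac{\sum \text{disk write iops}}{\text{replicas} \times \mathrm{WA}} \approx \frac{24 \times 60000}{2 \times 4} = 180000\ \text{iops},
$$

which is in the same ballpark as the measured 162000 iops (24 drives = 4 nodes x 6 SSDs, each doing ~60000 T1Q32 write iops per the raw drive numbers above).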

### EC/XOR 2+1

Vitastor:

- T1Q1 write: 2808 iops (~0.355ms latency)
- T1Q1 read: 6190 iops (~0.16ms latency)
- T2Q64 write: 85500 iops, total CPU usage by OSDs about 3.4 virtual cores on each node
- T8Q64 read: 812000 iops, total CPU usage by OSDs about 4.7 virtual cores on each node
- Linear write (4M T1Q32): 3200 MB/s
- Linear read (4M T1Q32): 1800 MB/s

Ceph:

- T1Q1 write: 730 iops (~1.37ms latency)
- T1Q1 read: 1500 iops with cold cache (~0.66ms latency), 2300 iops after 2 minute metadata cache warmup (~0.435ms latency)
- T4Q128 write (4 RBD images): 45300 iops, total CPU usage by OSDs about 30 virtual cores on each node
- T8Q64 read (4 RBD images): 278600 iops, total CPU usage by OSDs about 40 virtual cores on each node
- Linear write (4M T1Q32): 1950 MB/s before preallocation, 2500 MB/s after preallocation
- Linear read (4M T1Q32): 2400 MB/s

### NBD

NBD is currently required to mount Vitastor via kernel, but it imposes additional overhead
due to additional copying between the kernel and userspace. This mostly hurts linear
bandwidth, not iops.

Vitastor with single-thread NBD on the same hardware:

- T1Q1 write: 6000 iops (0.166ms latency)
- T1Q1 read: 5518 iops (0.18ms latency)
- T1Q128 write: 94400 iops
- T1Q128 read: 103000 iops
- Linear write (4M T1Q128): 1266 MB/s (compared to 2800 MB/s via fio)
- Linear read (4M T1Q128): 975 MB/s (compared to 1500 MB/s via fio)

## Installation

### Debian

- Trust Vitastor package signing key:
  `wget -q -O - https://vitastor.io/debian/pubkey | sudo apt-key add -`
- Add Vitastor package repository to your /etc/apt/sources.list:
  - Debian 11 (Bullseye/Sid): `deb https://vitastor.io/debian bullseye main`
  - Debian 10 (Buster): `deb https://vitastor.io/debian buster main`
- For Debian 10 (Buster) also enable backports repository:
  `deb http://deb.debian.org/debian buster-backports main`
- Install packages: `apt update; apt install vitastor lp-solve etcd linux-image-amd64`
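
Put together, a minimal non-interactive install on Debian 10 might look like this (the same commands as above, just combined; run as root):

```
# Trust the Vitastor package signing key
wget -q -O - https://vitastor.io/debian/pubkey | apt-key add -

# Add the Vitastor repository and the Buster backports repository
echo 'deb https://vitastor.io/debian buster main' >> /etc/apt/sources.list
echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list

# Install Vitastor and its dependencies
apt update
apt install -y vitastor lp-solve etcd linux-image-amd64
```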

### CentOS

- Add Vitastor package repository:
  - CentOS 7: `yum install https://vitastor.io/rpms/centos/7/vitastor-release-1.0-1.el7.noarch.rpm`
  - CentOS 8: `dnf install https://vitastor.io/rpms/centos/8/vitastor-release-1.0-1.el8.noarch.rpm`
- Enable EPEL: `yum/dnf install epel-release`
- Enable additional CentOS repositories:
  - CentOS 7: `yum install centos-release-scl`
  - CentOS 8: `dnf install centos-release-advanced-virtualization`
- Enable elrepo-kernel:
  - CentOS 7: `yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm`
  - CentOS 8: `dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm`
- Install packages: `yum/dnf install vitastor lpsolve etcd kernel-ml qemu-kvm`

### Building from Source

- Install Linux kernel 5.4 or newer, for io_uring support. 5.8 or later is highly recommended because
  there is at least one known io_uring hang with 5.4 and an HP SmartArray controller.
- Install liburing 0.4 or newer and its headers.
- Install lp_solve.
- Install etcd. Attention: you need a fixed version from https://github.com/vitalif/etcd/,
  branch release-3.4, because there is a bug in upstream etcd which makes Vitastor OSDs fail to
  move PGs out of the "starting" state if you have around 500 PGs or more. The custom build
  will become unnecessary when etcd merges the fix: https://github.com/etcd-io/etcd/pull/12402.
- Install node.js 10 or newer.
- Install gcc and g++ 8.x or newer.
- Clone https://yourcmc.ru/git/vitalif/vitastor/ with submodules.
- Install QEMU 3.0+, get its source, begin to build it, stop the build and copy headers:
  - `<qemu>/include` &rarr; `<vitastor>/qemu/include`
  - Debian:
    * Use qemu packages from the main repository
    * `<qemu>/b/qemu/config-host.h` &rarr; `<vitastor>/qemu/b/qemu/config-host.h`
    * `<qemu>/b/qemu/qapi` &rarr; `<vitastor>/qemu/b/qemu/qapi`
  - CentOS 8:
    * Use qemu packages from the Advanced-Virtualization repository. To enable it, run
      `yum install centos-release-advanced-virtualization.noarch` and then `yum install qemu`
    * `<qemu>/config-host.h` &rarr; `<vitastor>/qemu/b/qemu/config-host.h`
    * For QEMU 3.0+: `<qemu>/qapi` &rarr; `<vitastor>/qemu/b/qemu/qapi`
    * For QEMU 2.0+: `<qemu>/qapi-types.h` &rarr; `<vitastor>/qemu/b/qemu/qapi-types.h`
  - `config-host.h` and `qapi` are required because they contain generated headers
- You can also rebuild QEMU with a patch that makes LD_PRELOAD unnecessary to load the vitastor driver.
  See `qemu-*.*-vitastor.patch`.
- Install fio 3.7 or later, get its source and symlink it into `<vitastor>/fio`.
- Build Vitastor with `make -j8`.
- Run `make install` (optionally with `LIBDIR=/usr/lib64 QEMU_PLUGINDIR=/usr/lib64/qemu-kvm`
  if you're using an RPM-based distro).
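
Roughly, on Debian the steps above might look like this. This is only a sketch, not a tested script: it assumes the QEMU source was configured with a `b/qemu` build directory (as in the Debian paths above) and that the QEMU and fio source trees sit next to the vitastor checkout.

```
# Clone Vitastor together with its submodules
git clone --recurse-submodules https://yourcmc.ru/git/vitalif/vitastor/
cd vitastor

# Copy the generated QEMU headers (QEMU assumed to be partially built in ../qemu)
mkdir -p qemu/b/qemu
cp -r ../qemu/include qemu/
cp ../qemu/b/qemu/config-host.h qemu/b/qemu/
cp -r ../qemu/b/qemu/qapi qemu/b/qemu/

# Symlink the fio sources, then build and install
ln -s ../fio fio
make -j8
make install
```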

## Running

Please note that the startup procedure isn't simple yet - you specify the configuration
and calculate disk offsets almost by hand. This will be fixed in the near future.

- Get some SATA or NVMe SSDs with capacitors (server-grade drives). You can use desktop SSDs
  with lazy fsync, but prepare for inferior single-thread latency.
- Get a fast network (at least 10 Gbit/s).
- Disable CPU powersaving: `cpupower idle-set -D 0 && cpupower frequency-set -g performance`.
- Start etcd with the `--max-txn-ops=100000 --auto-compaction-retention=10 --auto-compaction-mode=revision` options.
- Create the global configuration in etcd: `etcdctl --endpoints=... put /vitastor/config/global '{"immediate_commit":"all"}'`
  (if all your drives have capacitors).
- Create the pool configuration in etcd: `etcdctl --endpoints=... put /vitastor/config/pools '{"1":{"name":"testpool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'`.
  For jerasure pools the configuration should look like the following: `2:{"name":"ecpool","scheme":"jerasure","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}`.
- Calculate offsets for your drives with `node /usr/lib/vitastor/mon/simple-offsets.js --device /dev/sdX`.
- Make systemd units for your OSDs. Look at `/usr/lib/vitastor/mon/make-units.sh` for an example.
  Notable configuration variables from the example:
  - `disable_data_fsync 1` - only safe with server-grade drives with capacitors.
  - `immediate_commit all` - use this if all your drives are server-grade.
  - `disable_device_lock 1` - only required if you run multiple OSDs on one block device.
  - `flusher_count 16` - a flusher is a micro-thread that removes old data from the journal.
    More flushers mean more aggressive journal flushing, which allows for more throughput
    but slightly hurts latency under light load. Flushing will probably be improved in the future,
    because currently high queue depths sometimes lead to performance degradation.
  - `disk_alignment`, `journal_block_size`, `meta_block_size` should be set to the internal
    block size of your SSDs, which is 4096 on most drives.
  - `journal_no_same_sector_overwrites true` prevents multiple overwrites of the same journal sector.
    Most (99%) SSDs don't need this option, but the Intel D3-4510 does, because it doesn't like when you
    overwrite the same sector twice in a short period of time. The setting forces Vitastor to never
    overwrite the same journal sector twice in a row, which makes the D3-4510 almost happy. Not totally
    happy, because overwrites of the same block can still happen in the metadata area... When this
    setting is enabled, you also need to raise the `journal_sector_buffer_count` setting, which is the
    number of dirty journal sectors that may be written to at the same time.
- `systemctl start vitastor.target` everywhere.
- Start any number of monitors: `node /usr/lib/vitastor/mon/mon-main.js --etcd_url 'http://10.115.0.10:2379,http://10.115.0.11:2379,http://10.115.0.12:2379,http://10.115.0.13:2379' --etcd_prefix '/vitastor' --etcd_start_timeout 5`.
- At this point, one of the monitors will configure the PGs and the OSDs will start them.
- You can check PG states with `etcdctl --endpoints=... get --prefix /vitastor/pg/state`. All PGs should become 'active'.
- Run tests with (for example): `fio -thread -ioengine=/usr/lib/x86_64-linux-gnu/vitastor/libfio_cluster.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -etcd=10.115.0.10:2379/v3 -pool=1 -inode=1 -size=400G`.
- Upload a VM disk image with qemu-img (for example):
  ```
  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/qemu/block-vitastor.so qemu-img convert -f qcow2 debian10.qcow2 -p \
    -O raw 'vitastor:etcd_host=10.115.0.10\:2379/v3:pool=1:inode=1:size=2147483648'
  ```
- Run QEMU with (for example):
  ```
  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/qemu/block-vitastor.so qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive 'file=vitastor:etcd_host=10.115.0.10\:2379/v3:pool=1:inode=1:size=2147483648',format=raw,if=none,id=drive-virtio-disk0,cache=none \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=off,physical_block_size=4096,logical_block_size=512 \
    -vnc 0.0.0.0:0
  ```
- Remove an inode with (for example):
  ```
  vitastor-rm --etcd_address 10.115.0.10:2379/v3 --pool 1 --inode 1 --parallel_osds 16 --iodepth 32
  ```

## Known Problems

- Object deletion requests may currently lead to 'incomplete' objects if your OSDs crash during
  the deletion, because proper handling of object cleanup in a cluster requires a "three-phase" protocol
  and that isn't implemented yet. Just repeat the removal in this case.

## Implementation Principles

- I like simple and stupid solutions, so expect Vitastor to stay simple.
- I also like reinventing the wheel to some extent, like writing my own HTTP client
  for etcd interaction instead of using prebuilt libraries, because in this case
  I'm confident about what my code does and what it doesn't do.
- I don't care about C++ "best practices" like RAII or proper inheritance or usage of
  smart pointers or whatever, and I don't intend to change my mind, so if you're here
  looking for ideal reference C++ code, this probably isn't the right place.
- I like node.js better than any other dynamically-typed language interpreter
  because it's faster than any other interpreter in the world, has a neutral C-like
  syntax and a built-in event loop. That's why the Monitor is implemented in node.js.

## Author and License

Copyright (c) Vitaliy Filippov (vitalif [at] yourcmc.ru), 2019+

You can also find me in the Russian Telegram Ceph chat: https://t.me/ceph_ru

All server-side code (OSD, Monitor and so on) is licensed under the terms of
Vitastor Network Public License 1.0 (VNPL 1.0), a copyleft license based on
GNU GPLv3.0 with the additional "Network Interaction" clause which requires
opensourcing all programs directly or indirectly interacting with Vitastor
through a computer network ("Proxy Programs"). Proxy Programs may be made public
not only under the terms of the same license, but also under the terms of any
GPL-compatible Free Software License, as listed by the Free Software Foundation.
This is a stricter copyleft license than the Affero GPL.

Basically, you can't use the software in a proprietary environment to provide
its functionality to users without opensourcing all intermediary components
standing between the user and Vitastor or purchasing a commercial license
from the author 😀.

Client libraries (cluster_client and so on) are dual-licensed under the same
VNPL 1.0 and also GNU GPL 2.0 or later to allow for compatibility with GPLed
software like QEMU and fio.

You can find the full text of VNPL-1.0 in the file [VNPL-1.0.txt](VNPL-1.0.txt).
GPL 2.0 is also included in this repository as [GPL-2.0.txt](GPL-2.0.txt).