Spelling fixes (found by codespell)

Branch: master
Author: Karsten Weiss, 2020-07-03 09:09:40 +02:00 (committed by Karsten Weiss)
parent 728fa1e0f9
commit f280123d0b
20 changed files with 46 additions and 46 deletions


@@ -20,12 +20,12 @@ install:
# TODO: Not in repos for 14.04 trustz but comes 16.04 xenial
#- sudo apt-get install -y libpnetcdf-dev pnetcdf-bin
# Install HDFS
-# TODO: Not sure with which c libray hdfs should be used and if it is in
+# TODO: Not sure with which c library hdfs should be used and if it is in
# the ubuntu repos
# Probably hadoop needs to be installed an provides native API.
# Install Amazon S3
# TODO: The needed library needs to be installed. Follow the instructions in
-# aiori-S3.c to achive this.
+# aiori-S3.c to achieve this.
# GPFS
# NOTE: Think GPFS need a license and is therefore not testable with travis.
script:

NEWS (4 lines changed)

@@ -133,7 +133,7 @@ Version 2.10.3
Contributed by demyn@users.sourceforge.net
- Ported to Windows. Required changes related to 'long' types, which on Windows
are always 32-bits, even on 64-bit systems. Missing system headers and
-functions acount for most of the remaining changes.
+functions account for most of the remaining changes.
New files for Windows:
- IOR/ior.vcproj - Visual C project file
- IOR/src/C/win/getopt.{h,c} - GNU getopt() support
@@ -193,7 +193,7 @@ Version 2.9.5
- Added notification for "Using reorderTasks '-C' (expecting block, not cyclic,
task assignment)"
- Corrected bug with read performance with stonewalling (was using full size,
-stat'ed file instead of bytes transfered).
+stat'ed file instead of bytes transferred).
Version 2.9.4
--------------------------------------------------------------------------------


@@ -1,8 +1,8 @@
# HPC IO Benchmark Repository [![Build Status](https://travis-ci.org/hpc/ior.svg?branch=master)](https://travis-ci.org/hpc/ior)
This repository contains the IOR and mdtest parallel I/O benchmarks. The
-[official IOR/mdtest documention][] can be found in the `docs/` subdirectory or
-on Read the Docs.
+[official IOR/mdtest documentation][] can be found in the `docs/` subdirectory
+or on Read the Docs.
## Building
@@ -28,4 +28,4 @@ on Read the Docs.
distributions at once.
[official IOR release]: https://github.com/hpc/ior/releases
-[official IOR/mdtest documention]: http://ior.readthedocs.org/
+[official IOR/mdtest documentation]: http://ior.readthedocs.org/


@@ -40,7 +40,7 @@ Required Options:
Optional Options:
--daos.group <group_name>: group name of servers with the pool
--daos.chunk_size <chunk_size>: Chunk size of the array object controlling striping over DKEYs
---daos.destroy flag to destory the container on finalize
+--daos.destroy flag to destroy the container on finalize
--daos.oclass <object_class>: specific object class for array object
Examples that should work include:
@@ -66,7 +66,7 @@ Required Options:
Optional Options:
--dfs.group <group_name>: group name of servers with the pool
--dfs.chunk_size <chunk_size>: Chunk size of the files
---dfs.destroy flag to destory the container on finalize
+--dfs.destroy flag to destroy the container on finalize
--dfs.oclass <object_class>: specific object class for files
In the IOR options, the file name should be specified on the root dir directly

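For context, a minimal sketch of an IOR run against the DFS driver using the optional flags documented in the hunk above. The pool/container flags (--dfs.pool, --dfs.cont), the SX object class, the process count, and the sizes are illustrative assumptions, not taken from this excerpt:

    # Sketch only: pool/container selection and the SX object class are assumed;
    # chunk_size, oclass, and destroy are the optional flags documented above.
    mpirun -np 8 ./ior -a DFS -w -r -t 1m -b 64m \
        --dfs.pool "$DAOS_POOL" --dfs.cont "$DAOS_CONT" \
        --dfs.chunk_size 1048576 --dfs.oclass SX --dfs.destroy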

@@ -47,7 +47,7 @@ Two ways to run IOR:
E.g., to execute: IOR -W -f script
This defaults all tests in 'script' to use write data checking.
-* The Command line supports to specify additional parameters for the choosen API.
+* The Command line supports to specify additional parameters for the chosen API.
For example, username and password for the storage.
Available options are listed in the help text after selecting the API when running with -h.
For example, 'IOR -a DUMMY -h' shows the supported options for the DUMMY backend.
@@ -361,7 +361,7 @@ GPFS-SPECIFIC:
* gpfsReleaseToken - immediately after opening or creating file, release
all locks. Might help mitigate lock-revocation
-traffic when many proceses write/read to same file.
+traffic when many processes write/read to same file.
BeeGFS-SPECIFIC (POSIX only):
================
@@ -499,7 +499,7 @@ zip, gzip, and bzip.
3) bzip2: For bziped files a transfer size of 1k is insufficient (~50% compressed).
To avoid compression a transfer size of greater than the bzip block size is required
-(default = 900KB). I suggest a transfer size of greather than 1MB to avoid bzip2 compression.
+(default = 900KB). I suggest a transfer size of greater than 1MB to avoid bzip2 compression.
Be aware of the block size your compression algorithm will look at, and adjust the transfer size
accordingly.
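A hedged illustration of the advice above (API, sizes, and path are examples, not taken from this excerpt): a 2 MiB transfer size comfortably exceeds bzip2's default 900 KB block size, so no single transfer fits inside one compression block:

    # Illustrative only: transfer size (-t) chosen above the 900 KB bzip2 block size.
    ./ior -a POSIX -w -t 2m -b 64m -o /mnt/fs/ior.dat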
@@ -660,7 +660,7 @@ HOW DO I USE HINTS?
'setenv IOR_HINT__MPI__<hint> <value>'
-HOW DO I EXPLICITY SET THE FILE DATA SIGNATURE?
+HOW DO I EXPLICITLY SET THE FILE DATA SIGNATURE?
The data signature for a transfer contains the MPI task number, transfer-
buffer offset, and also timestamp for the start of iteration. As IOR works

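A sketch of the hint mechanism referenced in the hunk above, translated from the guide's csh 'setenv' form to bash; the hint name romio_cb_write and all other values are illustrative, not taken from this excerpt:

    # Any MPI-IO hint understood by the MPI library can be passed this way;
    # the hint name below is only an example.
    export IOR_HINT__MPI__romio_cb_write=enable
    mpirun -np 4 ./ior -a MPIIO -w -r -t 1m -b 16m -o /mnt/fs/ior.dat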

@@ -28,7 +28,7 @@ Use ``collective creates'', meaning task 0 does all the creates.
Only perform the create phase of the tests.
.TP
.I "-d" testdir[@testdir2]
-The directory in which the tests will run. For multiple pathes, must use fully-qualified pathnames.
+The directory in which the tests will run. For multiple paths, must use fully-qualified pathnames.
[default: working directory of mdtest].
.TP
.I "-D"

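A sketch of the multi-path form of -d described above; the directories, process count, and the -n item count are illustrative values, not taken from this excerpt:

    # Two fully-qualified test directories, separated by '@'.
    mpirun -np 4 ./mdtest -n 1000 -d /mnt/fs1/mdtest@/mnt/fs2/mdtest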

@@ -146,7 +146,7 @@ HOW DO I USE HINTS?
'setenv IOR_HINT__MPI__<hint> <value>'
-HOW DO I EXPLICITY SET THE FILE DATA SIGNATURE?
+HOW DO I EXPLICITLY SET THE FILE DATA SIGNATURE?
The data signature for a transfer contains the MPI task number, transfer-
buffer offset, and also timestamp for the start of iteration. As IOR works


@@ -302,7 +302,7 @@ GPFS-SPECIFIC
* ``gpfsReleaseToken`` - release all locks immediately after opening or
creating file. Might help mitigate lock-revocation traffic when many
-proceses write/read to same file. (default: 0)
+processes write/read to same file. (default: 0)
Verbosity levels
----------------
@@ -338,7 +338,7 @@ bzip.
3) bzip2: For bziped files a transfer size of 1k is insufficient (~50% compressed).
To avoid compression a transfer size of greater than the bzip block size is required
-(default = 900KB). I suggest a transfer size of greather than 1MB to avoid bzip2 compression.
+(default = 900KB). I suggest a transfer size of greater than 1MB to avoid bzip2 compression.
Be aware of the block size your compression algorithm will look at, and adjust
the transfer size accordingly.


@@ -4,7 +4,7 @@ First Steps with IOR
====================
This is a short tutorial for the basic usage of IOR and some tips on how to use
-IOR to handel caching effects as these are very likely to affect your
+IOR to handle caching effects as these are very likely to affect your
measurements.
Running IOR


@@ -514,7 +514,7 @@ DFS_Finalize(aiori_mod_opt_t *options)
uuid_t uuid;
double t1, t2;
-INFO(VERBOSE_1, "Destorying DFS Container: %s\n", o->cont);
+INFO(VERBOSE_1, "Destroying DFS Container: %s\n", o->cont);
uuid_parse(o->cont, uuid);
t1 = MPI_Wtime();
rc = daos_cont_destroy(poh, uuid, 1, NULL);
@@ -561,7 +561,7 @@ DFS_Finalize(aiori_mod_opt_t *options)
}
/*
-* Creat and open a file through the DFS interface.
+* Create and open a file through the DFS interface.
*/
static aiori_fd_t *
DFS_Create(char *testFileName, int flags, aiori_mod_opt_t *param)


@@ -149,7 +149,7 @@ static int IME_Access(const char *path, int mode, IOR_param_t *param)
}
/*
-* Creat and open a file through the IME interface.
+* Create and open a file through the IME interface.
*/
static void *IME_Create(char *testFileName, IOR_param_t *param)
{


@@ -128,7 +128,7 @@ static void ior_mmap_file(int *file, int mflags, void *param)
}
/*
-* Creat and open a file through the POSIX interface, then setup mmap.
+* Create and open a file through the POSIX interface, then setup mmap.
*/
static aiori_fd_t *MMAP_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
{


@@ -368,7 +368,7 @@ bool beegfs_createFilePath(char* filepath, mode_t mode, int numTargets, int chun
/*
-* Creat and open a file through the POSIX interface.
+* Create and open a file through the POSIX interface.
*/
aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
{
@@ -394,9 +394,9 @@ aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
#define FASYNC 00020000 /* fcntl, for BSD compatibility */
#endif
if (o->lustre_set_striping) {
-/* In the single-shared-file case, task 0 has to creat the
-file with the Lustre striping options before any other processes
-open the file */
+/* In the single-shared-file case, task 0 has to create the
+file with the Lustre striping options before any other
+processes open the file */
if (!hints->filePerProc && rank != 0) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
fd_oflag |= O_RDWR;
@@ -485,7 +485,7 @@ aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
}
/*
-* Creat a file through mknod interface.
+* Create a file through mknod interface.
*/
int POSIX_Mknod(char *testFileName)
{


@@ -126,7 +126,7 @@ const char* bucket_name = "ior";
/* TODO: The following stuff goes into options! */
/* REST/S3 variables */
// CURL* curl; /* for libcurl "easy" fns (now managed by aws4c) */
-# define IOR_CURL_INIT 0x01 /* curl top-level inits were perfomed once? */
+# define IOR_CURL_INIT 0x01 /* curl top-level inits were performed once? */
# define IOR_CURL_NOCONTINUE 0x02
# define IOR_CURL_S3_EMC_EXT 0x04 /* allow EMC extensions to S3? */
@@ -286,7 +286,7 @@ static int S3_check_params(IOR_param_t * test){
* NOTE: Our custom version of aws4c can be configured so that connections
* are reused, instead of opened and closed on every operation. We
* do configure it that way, but you still need to call these
-* connect/disconnet functions, in order to insure that aws4c has
+* connect/disconnect functions, in order to insure that aws4c has
* been configured.
* ---------------------------------------------------------------------------
*/
@@ -322,7 +322,7 @@ static void s3_connect( IOR_param_t* param ) {
aws_read_config(getenv("USER")); // requires ~/.awsAuth
aws_reuse_connections(1);
-// initalize IOBufs. These are basically dynamically-extensible
+// initialize IOBufs. These are basically dynamically-extensible
// linked-lists. "growth size" controls the increment of new memory
// allocated, whenever storage is used up.
param->io_buf = aws_iobuf_new();
@@ -714,7 +714,7 @@ EMC_Open( char *testFileName, IOR_param_t * param ) {
* impose two scaling problems: (1) requires all ETags to be shipped at
* the BW available to a single process, (1) requires either that they
* all fit into memory of a single process, or be written to disk
-* (imposes additional BW contraints), or make a more-complex
+* (imposes additional BW constraints), or make a more-complex
* interaction with a threaded curl writefunction, to present the
* appearance of a single thread to curl, whilst allowing streaming
* reception of non-local ETags.
@@ -777,7 +777,7 @@ S3_Xfer_internal(int access,
//
// In the N:1 case, the global order of part-numbers we're writing
// depends on whether wer're writing strided or segmented, in
-// other words, how <offset> and <remaining> are acutally
+// other words, how <offset> and <remaining> are actually
// positioning the parts being written. [See discussion at
// S3_Close_internal().]
//
@@ -1014,7 +1014,7 @@ S3_Fsync( void *fd, IOR_param_t * param ) {
*
* ISSUE: The S3 spec says that a multi-part upload can have at most 10,000
* parts. Does EMC allow more than this? (NOTE the spec also says
-* parts must be at leaast 5MB, but EMC definitely allows smaller
+* parts must be at least 5MB, but EMC definitely allows smaller
* parts than that.)
*
* ISSUE: All Etags must be sent from a single rank, in a single
@@ -1126,7 +1126,7 @@ S3_Close_internal( void* fd,
// add XML for *all* the parts. The XML must be ordered by
// part-number. Each rank wrote <etags_per_rank> parts,
// locally. At rank0, the etags for each rank are now
-// stored as a continguous block of text, with the blocks
+// stored as a contiguous block of text, with the blocks
// stored in rank order in etag_vec. In other words, our
// internal rep at rank 0 matches the "segmented" format.
// From this, we must select etags in an order matching how


@@ -641,9 +641,9 @@ FillBuffer(void *buffer,
unsigned long long hi, lo;
unsigned long long *buf = (unsigned long long *)buffer;
-if(test->dataPacketType == incompressible ) { /* Make for some non compressable buffers with randomish data */
+if(test->dataPacketType == incompressible ) { /* Make for some non compressible buffers with randomish data */
-/* In order for write checks to work, we have to restart the psuedo random sequence */
+/* In order for write checks to work, we have to restart the pseudo random sequence */
if(reseed_incompressible_prng == TRUE) {
test->incompressibleSeed = test->setTimeStampSignature + rank; /* We copied seed into timestampSignature at initialization, also add the rank to add randomness between processes */
reseed_incompressible_prng = FALSE;
@@ -1637,7 +1637,7 @@ static void ValidateTests(IOR_param_t * test)
&& (strcasecmp(test->api, "CEPHFS") != 0)) && test->fsync)
WARN_RESET("fsync() not supported in selected backend",
test, &defaults, fsync);
-/* parameter consitency */
+/* parameter consistency */
if (test->reorderTasks == TRUE && test->reorderTasksRandom == TRUE)
ERR("Both Constant and Random task re-ordering specified. Choose one and resubmit");
if (test->randomOffset && test->reorderTasksRandom
@@ -1672,7 +1672,7 @@ static void ValidateTests(IOR_param_t * test)
* Returns a precomputed array of IOR_offset_t for the inner benchmark loop.
* They are sequential and the last element is set to -1 as end marker.
* @param test IOR_param_t for getting transferSize, blocksize and SegmentCount
-* @param pretendRank int pretended Rank for shifting the offsest corectly
+* @param pretendRank int pretended Rank for shifting the offsets correctly
* @return IOR_offset_t
*/
IOR_offset_t *GetOffsetArraySequential(IOR_param_t * test, int pretendRank)
@@ -1720,7 +1720,7 @@ IOR_offset_t *GetOffsetArraySequential(IOR_param_t * test, int pretendRank)
* diversion in accesse as it dose with filePerProc. This is expected but
* should be mined.
* @param test IOR_param_t for getting transferSize, blocksize and SegmentCount
-* @param pretendRank int pretended Rank for shifting the offsest corectly
+* @param pretendRank int pretended Rank for shifting the offsets correctly
* @return IOR_offset_t
* @return
*/


@@ -127,7 +127,7 @@ typedef struct
int useExistingTestFile; /* do not delete test file before access */
int storeFileOffset; /* use file offset as stored signature */
int deadlineForStonewalling; /* max time in seconds to run any test phase */
-int stoneWallingWearOut; /* wear out the stonewalling, once the timout is over, each process has to write the same amount */
+int stoneWallingWearOut; /* wear out the stonewalling, once the timeout is over, each process has to write the same amount */
uint64_t stoneWallingWearOutIterations; /* the number of iterations for the stonewallingWearOut, needed for readBack */
char * stoneWallingStatusFile;


@@ -492,7 +492,7 @@ void collective_helper(const int dirs, const int create, const char* path, uint6
progress->items_done = progress->items_per_dir;
}
-/* recusive function to create and remove files/directories from the
+/* recursive function to create and remove files/directories from the
directory tree */
void create_remove_items(int currDepth, const int dirs, const int create, const int collective, const char *path, uint64_t dirNum, rank_progress_t * progress) {
unsigned i;


@@ -282,7 +282,7 @@ int contains_only(char *haystack, char *needle)
/* check for "needle" */
if (strncasecmp(ptr, needle, strlen(needle)) != 0)
return 0;
-/* make sure the rest of the line is only whitspace as well */
+/* make sure the rest of the line is only whitespace as well */
for (ptr += strlen(needle); ptr < end; ptr++) {
if (!isspace(*ptr))
return 0;
@@ -395,7 +395,7 @@ option_help * createGlobalOptions(IOR_param_t * params){
{'C', NULL, "reorderTasks -- changes task ordering for readback (useful to avoid client cache)", OPTION_FLAG, 'd', & params->reorderTasks},
{'d', NULL, "interTestDelay -- delay between reps in seconds", OPTION_OPTIONAL_ARGUMENT, 'd', & params->interTestDelay},
{'D', NULL, "deadlineForStonewalling -- seconds before stopping write or read phase", OPTION_OPTIONAL_ARGUMENT, 'd', & params->deadlineForStonewalling},
-{.help=" -O stoneWallingWearOut=1 -- once the stonewalling timout is over, all process finish to access the amount of data", .arg = OPTION_OPTIONAL_ARGUMENT},
+{.help=" -O stoneWallingWearOut=1 -- once the stonewalling timeout is over, all process finish to access the amount of data", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O stoneWallingWearOutIterations=N -- stop after processing this number of iterations, needed for reading data back written with stoneWallingWearOut", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O stoneWallingStatusFile=FILE -- this file keeps the number of iterations from stonewalling during write and allows to use them for read", .arg = OPTION_OPTIONAL_ARGUMENT},
{'e', NULL, "fsync -- perform a fsync() operation at the end of each read/write phase", OPTION_FLAG, 'd', & params->fsync},
@@ -436,7 +436,7 @@ option_help * createGlobalOptions(IOR_param_t * params){
{'Z', NULL, "reorderTasksRandom -- changes task ordering to random ordering for readback", OPTION_FLAG, 'd', & params->reorderTasksRandom},
{0, "warningAsErrors", "Any warning should lead to an error.", OPTION_FLAG, 'd', & params->warningAsErrors},
{.help=" -O summaryFile=FILE -- store result data into this file", .arg = OPTION_OPTIONAL_ARGUMENT},
-{.help=" -O summaryFormat=[default,JSON,CSV] -- use the format for outputing the summary", .arg = OPTION_OPTIONAL_ARGUMENT},
+{.help=" -O summaryFormat=[default,JSON,CSV] -- use the format for outputting the summary", .arg = OPTION_OPTIONAL_ARGUMENT},
{0, "dryRun", "do not perform any I/Os just run evtl. inputs print dummy output", OPTION_FLAG, 'd', & params->dryRun},
LAST_OPTION,
};

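A sketch combining the stonewalling options listed in the hunk above (deadline, sizes, and file names are illustrative): the write phase stops issuing new work once the deadline expires but wears out so that every process writes the same amount, and the status file lets the read phase reuse the recorded iteration count:

    # Write phase: 30 s stonewall deadline with wear-out; iterations are recorded.
    ./ior -w -D 30 -O stoneWallingWearOut=1 -O stoneWallingStatusFile=stones.log \
          -t 1m -b 1g -o /mnt/fs/ior.dat
    # Read phase: read back what was actually written, using the recorded count.
    ./ior -r -O stoneWallingStatusFile=stones.log -t 1m -b 1g -o /mnt/fs/ior.dat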

@@ -7,7 +7,7 @@ Following are basic notes on how to deploy the 'ceph/demo' docker container. The
Run `docker pull ceph/demo` to download the image to your system.
################################
-# Deploy 'ceph/demo' conatiner #
+# Deploy 'ceph/demo' container #
################################
To deploy the Ceph cluster, execute the following command:


@@ -46,7 +46,7 @@ for IMAGE in $(find -type d | cut -b 3- |grep -v "^$") ; do
done
if [[ $ERROR != 0 ]] ; then
-echo "Errors occured!"
+echo "Errors occurred!"
else
echo "OK: all tests passed!"
fi