Redis: remove redis-3.2.8.

dev
fengbojiang 2019-11-23 11:08:39 +08:00
parent d0ec86faa4
commit 14cef497d0
566 changed files with 3 additions and 195965 deletions

View File

@ -99,7 +99,7 @@ for more details, see [nginx guide](https://github.com/F-Stack/f-stack/blob/mast
#### Redis
cd app/redis-3.2.8/
cd app/redis-5.0.5/
make
make install

View File

@ -1,29 +0,0 @@
.*.swp
*.o
*.log
dump.rdb
redis-benchmark
redis-check-aof
redis-check-rdb
redis-check-dump
redis-cli
redis-sentinel
redis-server
doc-tools
release
misc/*
src/release.h
appendonly.aof
SHORT_TERM_TODO
release.h
src/transfer.sh
src/configs
redis.ds
src/redis.conf
src/nodes.conf
deps/lua/src/lua
deps/lua/src/luac
deps/lua/src/liblua.a
.make-*
.prerequisites
*.dSYM

File diff suppressed because it is too large

View File

@ -1 +0,0 @@
Please check https://github.com/antirez/redis/issues

View File

@ -1,38 +0,0 @@
Note: by contributing code to the Redis project in any form, including sending
a pull request via Github, a code fragment or patch via private email or
public discussion groups, you agree to release your code under the terms
of the BSD license that you can find in the COPYING file included in the Redis
source distribution. The BSD license in the COPYING file applies to each
source file that you contribute.
# IMPORTANT: HOW TO USE REDIS GITHUB ISSUES
* Github issues SHOULD ONLY BE USED to report bugs, and for DETAILED feature
requests. Everything else belongs to the Redis Google Group.
PLEASE DO NOT POST GENERAL QUESTIONS that are not about bugs or suspected
bugs in the Github issues system. We'll be very happy to help you and provide
all the support on the Reddit sub:
http://reddit.com/r/redis
There is also an active community of Redis users at Stack Overflow:
http://stackoverflow.com/questions/tagged/redis
# How to provide a patch for a new feature
1. If it is a major feature or a semantic change, please post it as a new submission in r/redis on Reddit at http://reddit.com/r/redis. Try to be passionate about why the feature is needed, make users upvote your proposal to gain traction and so forth. Read the feedback from the community. But in this first step **please don't write code yet**.
2. If in step 1 you get an acknowledgment from the project leaders, use the
following procedure to submit a patch:
a. Fork Redis on github ( http://help.github.com/fork-a-repo/ )
b. Create a topic branch (git checkout -b my_branch)
c. Push to your branch (git push origin my_branch)
d. Initiate a pull request on github ( http://help.github.com/send-pull-requests/ )
e. Done :)
For minor fixes just open a pull request on Github.
Thanks!

View File

@ -1,10 +0,0 @@
Copyright (c) 2006-2015, Salvatore Sanfilippo
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Redis nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -1 +0,0 @@
See README

View File

@ -1,67 +0,0 @@
[Note: this is the Redis manifesto, for general information about
installing and running Redis read the README file instead.]
Redis Manifesto
===============
1 - A DSL for Abstract Data Types. Redis is a DSL (Domain Specific Language)
that manipulates abstract data types and implemented as a TCP daemon.
Commands manipulate a key space where keys are binary-safe strings and
values are different kinds of abstract data types. Every data type
represents an abstract version of a fundamental data structure. For instance
Redis Lists are an abstract representation of linked lists. In Redis, the
essence of a data type isn't just the kind of operations that the data types
support, but also the space and time complexity of the data type and the
operations performed upon it.
2 - Memory storage is #1. The Redis data set, composed of defined key-value
pairs, is primarily stored in the computer's memory. The amount of memory in
all kinds of computers, including entry-level servers, is increasing
significantly each year. Memory is fast, and allows Redis to have very
predictable performance. Datasets composed of 10k or 40 million keys will
perform similarly. Complex data types like Redis Sorted Sets are easy to
implement and manipulate in memory with good performance, making Redis very
simple. Redis will continue to explore alternative options (where data can
be optionally stored on disk, say) but the main goal of the project remains
the development of an in-memory database.
3 - Fundamental data structures for a fundamental API. The Redis API is a direct
consequence of fundamental data structures. APIs can often be arbitrary but
not an API that resembles the nature of fundamental data structures. If we
ever meet intelligent life forms from another part of the universe, they'll
likely know, understand and recognize the same basic data structures we have
in our computer science books. Redis will avoid intermediate layers in API,
so that the complexity is obvious and more complex operations can be
performed as the sum of the basic operations.
4 - Code is like a poem; it's not just something we write to reach some
practical result. Sometimes people that are far from the Redis philosophy
suggest using other code written by other authors (frequently in other
languages) in order to implement something Redis currently lacks. But to us
this is like if Shakespeare decided to end Enrico IV using the Paradiso from
the Divina Commedia. Is using any external code a bad idea? Not at all. Like
in "One Thousand and One Nights" smaller self contained stories are embedded
in a bigger story, we'll be happy to use beautiful self contained libraries
when needed. At the same time, when writing the Redis story we're trying to
write smaller stories that will fit in to other code.
5 - We're against complexity. We believe designing systems is a fight against
complexity. We'll accept to fight the complexity when it's worthwhile but
we'll try hard to recognize when a small feature is not worth 1000s of lines
of code. Most of the time the best way to fight complexity is by not
creating it at all.
6 - Two levels of API. The Redis API has two levels: 1) a subset of the API fits
naturally into a distributed version of Redis and 2) a more complex API that
supports multi-key operations. Both are useful if used judiciously but
there's no way to make the more complex multi-keys API distributed in an
opaque way without violating our other principles. We don't want to provide
the illusion of something that will work magically when actually it can't in
all cases. Instead we'll provide commands to quickly migrate keys from one
instance to another to perform multi-key operations and expose the tradeoffs
to the user.
7 - We optimize for joy. We believe writing code is a lot of hard work, and the
only way it can be worth it is by enjoying it. When there is no longer joy in
writing code, the best thing to do is stop. To prevent this, we'll avoid
taking paths that will make Redis less of a joy to develop.

View File

@ -1,11 +0,0 @@
# Top level makefile, the real shit is at src/Makefile
default: all
.DEFAULT:
cd src && $(MAKE) $@
install:
cd src && $(MAKE) $@
.PHONY: install

View File

@ -1,191 +0,0 @@
This README is just a fast *quick start* document. You can find more detailed documentation at http://redis.io.
What is Redis?
--------------
Redis is often referred to as a *data structures* server. What this means is that Redis provides access to mutable data structures via a set of commands, which are sent using a *server-client* model with TCP sockets and a simple protocol. So different processes can query and modify the same data structures in a shared way.
Data structures implemented into Redis have a few special properties:
* Redis cares to store them on disk, even if they are always served and modified into the server memory. This means that Redis is fast, but that it is also non-volatile.
* The implementation of data structures stresses memory efficiency, so data structures inside Redis will likely use less memory compared to the same data structure modeled using a high-level programming language.
* Redis offers a number of features that are natural to find in a database, like replication, tunable levels of durability, clustering, and high availability.
Another good example is to think of Redis as a more complex version of memcached, where the operations are not just SETs and GETs, but operations to work with complex data types like Lists, Sets, ordered data structures, and so forth.
If you want to know more, this is a list of selected starting points:
* Introduction to Redis data types. http://redis.io/topics/data-types-intro
* Try Redis directly inside your browser. http://try.redis.io
* The full list of Redis commands. http://redis.io/commands
* There is much more inside the Redis official documentation. http://redis.io/documentation
Building Redis
--------------
Redis can be compiled and used on Linux, OSX, OpenBSD, NetBSD, FreeBSD.
We support big endian and little endian architectures, and both 32 bit
and 64 bit systems.
It may compile on Solaris derived systems (for instance SmartOS) but our
support for this platform is *best effort* and Redis is not guaranteed to
work as well as in Linux, OSX, and \*BSD there.
It is as simple as:
% make
You can run a 32 bit Redis binary using:
% make 32bit
After building Redis, it is a good idea to test it, using:
% make test
Fixing build problems with dependencies or cached build options
---------
Redis has some dependencies which are included into the `deps` directory.
`make` does not rebuild dependencies automatically, even if something in the
source code of dependencies is changed.
When you update the source code with `git pull` or when code inside the
dependencies tree is modified in any other way, make sure to use the following
command in order to really clean everything and rebuild from scratch:
make distclean
This will clean: jemalloc, lua, hiredis, linenoise.
Also if you force certain build options like 32bit target, no C compiler
optimizations (for debugging purposes), and other similar build time options,
those options are cached indefinitely until you issue a `make distclean`
command.
Fixing problems building 32 bit binaries
---------
If after building Redis with a 32 bit target you need to rebuild it
with a 64 bit target, or the other way around, you need to perform a
`make distclean` in the root directory of the Redis distribution.
In case of build errors when trying to build a 32 bit binary of Redis, try
the following steps:
* Install the packages libc6-dev-i386 (also try g++-multilib).
* Try using the following command line instead of `make 32bit`:
`make CFLAGS="-m32 -march=native" LDFLAGS="-m32"`
Allocator
---------
Selecting a non-default memory allocator when building Redis is done by setting
the `MALLOC` environment variable. Redis is compiled and linked against libc
malloc by default, with the exception of jemalloc being the default on Linux
systems. This default was picked because jemalloc has proven to have fewer
fragmentation problems than libc malloc.
To force compiling against libc malloc, use:
% make MALLOC=libc
To compile against jemalloc on Mac OS X systems, use:
% make MALLOC=jemalloc
Verbose build
-------------
Redis will build with a user friendly colorized output by default.
If you want to see a more verbose output use the following:
% make V=1
Running Redis
-------------
To run Redis with the default configuration just type:
% cd src
% ./redis-server
If you want to provide your redis.conf, you have to run it using an additional
parameter (the path of the configuration file):
% cd src
% ./redis-server /path/to/redis.conf
It is possible to alter the Redis configuration by passing parameters directly
as options on the command line. Examples:
% ./redis-server --port 9999 --slaveof 127.0.0.1 6379
% ./redis-server /etc/redis/6379.conf --loglevel debug
All the options in redis.conf are also supported as options using the command
line, with exactly the same name.
Playing with Redis
------------------
You can use redis-cli to play with Redis. Start a redis-server instance,
then in another terminal try the following:
% cd src
% ./redis-cli
redis> ping
PONG
redis> set foo bar
OK
redis> get foo
"bar"
redis> incr mycounter
(integer) 1
redis> incr mycounter
(integer) 2
redis>
You can find the list of all the available commands at http://redis.io/commands.
Installing Redis
-----------------
In order to install Redis binaries into /usr/local/bin just use:
% make install
You can use `make PREFIX=/some/other/directory install` if you wish to use a
different destination.
`make install` will just install binaries in your system, but will not configure
init scripts and configuration files in the appropriate places. This is not
needed if you just want to play a bit with Redis, but if you are installing
it the proper way for a production system, we have a script that does this
for Ubuntu and Debian systems:
% cd utils
% ./install_server.sh
The script will ask you a few questions and will set up everything you need
to run Redis properly as a background daemon that will start again on
system reboots.
You'll be able to stop and start Redis using the script named
`/etc/init.d/redis_<portnumber>`, for instance `/etc/init.d/redis_6379`.
Code contributions
---
Note: by contributing code to the Redis project in any form, including sending
a pull request via Github, a code fragment or patch via private email or
public discussion groups, you agree to release your code under the terms
of the BSD license that you can find in the [COPYING][1] file included in the Redis
source distribution.
Please see the [CONTRIBUTING][2] file in this source distribution for more
information.
Enjoy!
[1]: https://github.com/antirez/redis/blob/unstable/COPYING
[2]: https://github.com/antirez/redis/blob/unstable/CONTRIBUTING

View File

@ -1,90 +0,0 @@
# Redis dependency Makefile
uname_S:= $(shell sh -c 'uname -s 2>/dev/null || echo not')
CCCOLOR="\033[34m"
LINKCOLOR="\033[34;1m"
SRCCOLOR="\033[33m"
BINCOLOR="\033[37;1m"
MAKECOLOR="\033[32;1m"
ENDCOLOR="\033[0m"
default:
@echo "Explicit target required"
.PHONY: default
# Prerequisites target
.make-prerequisites:
@touch $@
# Clean everything when CFLAGS is different
ifneq ($(shell sh -c '[ -f .make-cflags ] && cat .make-cflags || echo none'), $(CFLAGS))
.make-cflags: distclean
-(echo "$(CFLAGS)" > .make-cflags)
.make-prerequisites: .make-cflags
endif
# Clean everything when LDFLAGS is different
ifneq ($(shell sh -c '[ -f .make-ldflags ] && cat .make-ldflags || echo none'), $(LDFLAGS))
.make-ldflags: distclean
-(echo "$(LDFLAGS)" > .make-ldflags)
.make-prerequisites: .make-ldflags
endif
distclean:
-(cd hiredis && $(MAKE) clean) > /dev/null || true
-(cd linenoise && $(MAKE) clean) > /dev/null || true
-(cd lua && $(MAKE) clean) > /dev/null || true
-(cd geohash-int && $(MAKE) clean) > /dev/null || true
-(cd jemalloc && [ -f Makefile ] && $(MAKE) distclean) > /dev/null || true
-(rm -f .make-*)
.PHONY: distclean
hiredis: .make-prerequisites
@printf '%b %b\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)
cd hiredis && $(MAKE) static
.PHONY: hiredis
linenoise: .make-prerequisites
@printf '%b %b\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)
cd linenoise && $(MAKE)
.PHONY: linenoise
ifeq ($(uname_S),SunOS)
# Make isinf() available
LUA_CFLAGS= -D__C99FEATURES__=1
endif
LUA_CFLAGS+= -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' $(CFLAGS)
LUA_LDFLAGS+= $(LDFLAGS)
# lua's Makefile defines AR="ar rcu", which is unusual, and makes it more
# challenging to cross-compile lua (and redis). These defines make it easier
# to fit redis into cross-compilation environments, which typically set AR.
AR=ar
ARFLAGS=rcu
lua: .make-prerequisites
@printf '%b %b\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)
cd lua/src && $(MAKE) all CFLAGS="$(LUA_CFLAGS)" MYLDFLAGS="$(LUA_LDFLAGS)" AR="$(AR) $(ARFLAGS)"
.PHONY: lua
JEMALLOC_CFLAGS= -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops $(CFLAGS)
JEMALLOC_LDFLAGS= $(LDFLAGS)
jemalloc: .make-prerequisites
@printf '%b %b\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)
cd jemalloc && ./configure --with-lg-quantum=3 --with-jemalloc-prefix=je_ --enable-cc-silence CFLAGS="$(JEMALLOC_CFLAGS)" LDFLAGS="$(JEMALLOC_LDFLAGS)"
cd jemalloc && $(MAKE) CFLAGS="$(JEMALLOC_CFLAGS)" LDFLAGS="$(JEMALLOC_LDFLAGS)" lib/libjemalloc.a
.PHONY: jemalloc
geohash-int: .make-prerequisites
@printf '%b %b\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)
cd geohash-int && $(MAKE)
.PHONY: geohash-int

View File

@ -1,66 +0,0 @@
This directory contains all Redis dependencies, except for the libc that
should be provided by the operating system.
* **Jemalloc** is our memory allocator, used as a replacement for libc malloc on Linux by default. It has good performance and excellent fragmentation behavior. This component is upgraded from time to time.
* **geohash-int** is inside the dependencies directory but is actually part of the Redis project, since it is our private fork (heavily modified) of a library initially developed for Ardb, which is in turn a fork of Redis.
* **hiredis** is the official C client library for Redis. It is used by redis-cli, redis-benchmark and Redis Sentinel. It is part of the Redis official ecosystem but is developed externally from the Redis repository, so we just upgrade it as needed.
* **linenoise** is a readline replacement. It is developed by the same authors as Redis but is managed as a separate project and updated as needed.
* **lua** is Lua 5.1 with minor changes for security and additional libraries.
How to upgrade the above dependencies
===
Jemalloc
---
Jemalloc is unmodified. We only change settings via the `configure` script of Jemalloc using the `--with-lg-quantum` option, setting it to the value of 3 instead of 4. This provides us with more size classes that better suit the Redis data structures, in order to gain memory efficiency.
So in order to upgrade jemalloc:
1. Remove the jemalloc directory.
2. Substitute it with the new jemalloc source tree.
Geohash
---
This is never upgraded since it's part of the Redis project. If there are changes to merge from Ardb there is the need to manually check differences, but at this point the source code is pretty different.
Hiredis
---
Hiredis uses the SDS string library, which must be the same version used inside Redis itself. Hiredis is also very critical for Sentinel. Historically Redis often used forked versions of hiredis in one way or another. In order to upgrade, it is advised to take a lot of care:
1. Check with diff if hiredis API changed and what impact it could have in Redis.
2. Make sure that the SDS library inside Hiredis and inside Redis are compatible.
3. After the upgrade, run the Redis Sentinel test.
4. Check manually that redis-cli and redis-benchmark behave as expected, since we have no tests for CLI utilities currently.
Linenoise
---
Linenoise is rarely upgraded as needed. The upgrade process is trivial since
Redis uses a non modified version of linenoise, so to upgrade just do the
following:
1. Remove the linenoise directory.
2. Substitute it with the new linenoise source tree.
Lua
---
We use Lua 5.1 and no upgrade is planned currently, since we don't want to break
Lua scripts for new Lua features: in the context of Redis Lua scripts the
capabilities of 5.1 are usually more than enough, the release is rock solid,
and we definitely don't want to break old scripts.
So upgrading of Lua is up to the Redis project maintainers and should be a
manual procedure performed by taking a diff between the different versions.
Currently we have at least the following differences between official Lua 5.1
and our version:
1. Makefile is modified to allow a different compiler than GCC.
2. We have the implementation source code, and directly link to the following external libraries: `lua_cjson.o`, `lua_struct.o`, `lua_cmsgpack.o` and `lua_bit.o`.
3. There is a security fix in `ldo.c`, line 498: The check for `LUA_SIGNATURE[0]` is removed in order to avoid direct bytecode execution.

View File

@ -1,23 +0,0 @@
STD=
WARN= -Wall
OPT= -O2
R_CFLAGS= $(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS)
R_LDFLAGS= $(LDFLAGS)
DEBUG= -g
R_CC=$(CC) $(R_CFLAGS)
R_LD=$(CC) $(R_LDFLAGS)
all: geohash.o geohash_helper.o
.PHONY: all
geohash.o: geohash.h geohash.c
geohash_helper.o: geohash.h geohash_helper.h geohash_helper.c
.c.o:
$(R_CC) -c $<
clean:
rm -f *.o

View File

@ -1,295 +0,0 @@
/*
* Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>
* Copyright (c) 2014, Matt Stancliff <matt@genges.com>.
* Copyright (c) 2015-2016, Salvatore Sanfilippo <antirez@gmail.com>.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "geohash.h"
/**
* Hashing works like this:
* Divide the world into 4 buckets. Label each one as such:
* -----------------
* | | |
* | | |
* | 0,1 | 1,1 |
* -----------------
* | | |
* | | |
* | 0,0 | 1,0 |
* -----------------
*/
/* Interleave lower bits of x and y, so the bits of x
* are in the even positions and bits from y in the odd;
* x and y must initially be less than 2**32.
* From: https://graphics.stanford.edu/~seander/bithacks.html#InterleaveBMN
*/
static inline uint64_t interleave64(uint32_t xlo, uint32_t ylo) {
static const uint64_t B[] = {0x5555555555555555ULL, 0x3333333333333333ULL,
0x0F0F0F0F0F0F0F0FULL, 0x00FF00FF00FF00FFULL,
0x0000FFFF0000FFFFULL};
static const unsigned int S[] = {1, 2, 4, 8, 16};
uint64_t x = xlo;
uint64_t y = ylo;
x = (x | (x << S[4])) & B[4];
y = (y | (y << S[4])) & B[4];
x = (x | (x << S[3])) & B[3];
y = (y | (y << S[3])) & B[3];
x = (x | (x << S[2])) & B[2];
y = (y | (y << S[2])) & B[2];
x = (x | (x << S[1])) & B[1];
y = (y | (y << S[1])) & B[1];
x = (x | (x << S[0])) & B[0];
y = (y | (y << S[0])) & B[0];
return x | (y << 1);
}
/* reverse the interleave process
* derived from http://stackoverflow.com/questions/4909263
*/
static inline uint64_t deinterleave64(uint64_t interleaved) {
static const uint64_t B[] = {0x5555555555555555ULL, 0x3333333333333333ULL,
0x0F0F0F0F0F0F0F0FULL, 0x00FF00FF00FF00FFULL,
0x0000FFFF0000FFFFULL, 0x00000000FFFFFFFFULL};
static const unsigned int S[] = {0, 1, 2, 4, 8, 16};
uint64_t x = interleaved;
uint64_t y = interleaved >> 1;
x = (x | (x >> S[0])) & B[0];
y = (y | (y >> S[0])) & B[0];
x = (x | (x >> S[1])) & B[1];
y = (y | (y >> S[1])) & B[1];
x = (x | (x >> S[2])) & B[2];
y = (y | (y >> S[2])) & B[2];
x = (x | (x >> S[3])) & B[3];
y = (y | (y >> S[3])) & B[3];
x = (x | (x >> S[4])) & B[4];
y = (y | (y >> S[4])) & B[4];
x = (x | (x >> S[5])) & B[5];
y = (y | (y >> S[5])) & B[5];
return x | (y << 32);
}
void geohashGetCoordRange(GeoHashRange *long_range, GeoHashRange *lat_range) {
/* These are constraints from EPSG:900913 / EPSG:3785 / OSGEO:41001 */
/* We can't geocode at the north/south pole. */
long_range->max = GEO_LONG_MAX;
long_range->min = GEO_LONG_MIN;
lat_range->max = GEO_LAT_MAX;
lat_range->min = GEO_LAT_MIN;
}
int geohashEncode(const GeoHashRange *long_range, const GeoHashRange *lat_range,
double longitude, double latitude, uint8_t step,
GeoHashBits *hash) {
/* Check basic arguments sanity. */
if (hash == NULL || step > 32 || step == 0 ||
RANGEPISZERO(lat_range) || RANGEPISZERO(long_range)) return 0;
/* Return an error when trying to index outside the supported
* constraints. */
if (longitude > 180 || longitude < -180 ||
latitude > 85.05112878 || latitude < -85.05112878) return 0;
hash->bits = 0;
hash->step = step;
if (latitude < lat_range->min || latitude > lat_range->max ||
longitude < long_range->min || longitude > long_range->max) {
return 0;
}
double lat_offset =
(latitude - lat_range->min) / (lat_range->max - lat_range->min);
double long_offset =
(longitude - long_range->min) / (long_range->max - long_range->min);
/* convert to fixed point based on the step size */
lat_offset *= (1 << step);
long_offset *= (1 << step);
hash->bits = interleave64(lat_offset, long_offset);
return 1;
}
int geohashEncodeType(double longitude, double latitude, uint8_t step, GeoHashBits *hash) {
GeoHashRange r[2] = { { 0 } };
geohashGetCoordRange(&r[0], &r[1]);
return geohashEncode(&r[0], &r[1], longitude, latitude, step, hash);
}
int geohashEncodeWGS84(double longitude, double latitude, uint8_t step,
GeoHashBits *hash) {
return geohashEncodeType(longitude, latitude, step, hash);
}
int geohashDecode(const GeoHashRange long_range, const GeoHashRange lat_range,
const GeoHashBits hash, GeoHashArea *area) {
if (HASHISZERO(hash) || NULL == area || RANGEISZERO(lat_range) ||
RANGEISZERO(long_range)) {
return 0;
}
area->hash = hash;
uint8_t step = hash.step;
uint64_t hash_sep = deinterleave64(hash.bits); /* hash = [LAT][LONG] */
double lat_scale = lat_range.max - lat_range.min;
double long_scale = long_range.max - long_range.min;
uint32_t ilato = hash_sep; /* get lat part of deinterleaved hash */
uint32_t ilono = hash_sep >> 32; /* shift over to get long part of hash */
/* divide by 2**step.
* Then, for 0-1 coordinate, multiply times scale and add
to the min to get the absolute coordinate. */
area->latitude.min =
lat_range.min + (ilato * 1.0 / (1ull << step)) * lat_scale;
area->latitude.max =
lat_range.min + ((ilato + 1) * 1.0 / (1ull << step)) * lat_scale;
area->longitude.min =
long_range.min + (ilono * 1.0 / (1ull << step)) * long_scale;
area->longitude.max =
long_range.min + ((ilono + 1) * 1.0 / (1ull << step)) * long_scale;
return 1;
}
int geohashDecodeType(const GeoHashBits hash, GeoHashArea *area) {
GeoHashRange r[2] = { { 0 } };
geohashGetCoordRange(&r[0], &r[1]);
return geohashDecode(r[0], r[1], hash, area);
}
int geohashDecodeWGS84(const GeoHashBits hash, GeoHashArea *area) {
return geohashDecodeType(hash, area);
}
int geohashDecodeAreaToLongLat(const GeoHashArea *area, double *xy) {
if (!xy) return 0;
xy[0] = (area->longitude.min + area->longitude.max) / 2;
xy[1] = (area->latitude.min + area->latitude.max) / 2;
return 1;
}
int geohashDecodeToLongLatType(const GeoHashBits hash, double *xy) {
GeoHashArea area = { { 0 } };
if (!xy || !geohashDecodeType(hash, &area))
return 0;
return geohashDecodeAreaToLongLat(&area, xy);
}
int geohashDecodeToLongLatWGS84(const GeoHashBits hash, double *xy) {
return geohashDecodeToLongLatType(hash, xy);
}
static void geohash_move_x(GeoHashBits *hash, int8_t d) {
if (d == 0)
return;
uint64_t x = hash->bits & 0xaaaaaaaaaaaaaaaaULL;
uint64_t y = hash->bits & 0x5555555555555555ULL;
uint64_t zz = 0x5555555555555555ULL >> (64 - hash->step * 2);
if (d > 0) {
x = x + (zz + 1);
} else {
x = x | zz;
x = x - (zz + 1);
}
x &= (0xaaaaaaaaaaaaaaaaULL >> (64 - hash->step * 2));
hash->bits = (x | y);
}
static void geohash_move_y(GeoHashBits *hash, int8_t d) {
if (d == 0)
return;
uint64_t x = hash->bits & 0xaaaaaaaaaaaaaaaaULL;
uint64_t y = hash->bits & 0x5555555555555555ULL;
uint64_t zz = 0xaaaaaaaaaaaaaaaaULL >> (64 - hash->step * 2);
if (d > 0) {
y = y + (zz + 1);
} else {
y = y | zz;
y = y - (zz + 1);
}
y &= (0x5555555555555555ULL >> (64 - hash->step * 2));
hash->bits = (x | y);
}
void geohashNeighbors(const GeoHashBits *hash, GeoHashNeighbors *neighbors) {
neighbors->east = *hash;
neighbors->west = *hash;
neighbors->north = *hash;
neighbors->south = *hash;
neighbors->south_east = *hash;
neighbors->south_west = *hash;
neighbors->north_east = *hash;
neighbors->north_west = *hash;
geohash_move_x(&neighbors->east, 1);
geohash_move_y(&neighbors->east, 0);
geohash_move_x(&neighbors->west, -1);
geohash_move_y(&neighbors->west, 0);
geohash_move_x(&neighbors->south, 0);
geohash_move_y(&neighbors->south, -1);
geohash_move_x(&neighbors->north, 0);
geohash_move_y(&neighbors->north, 1);
geohash_move_x(&neighbors->north_west, -1);
geohash_move_y(&neighbors->north_west, 1);
geohash_move_x(&neighbors->north_east, 1);
geohash_move_y(&neighbors->north_east, 1);
geohash_move_x(&neighbors->south_east, 1);
geohash_move_y(&neighbors->south_east, -1);
geohash_move_x(&neighbors->south_west, -1);
geohash_move_y(&neighbors->south_west, -1);
}

View File

@ -1,118 +0,0 @@
/*
* Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>
* Copyright (c) 2014, Matt Stancliff <matt@genges.com>.
* Copyright (c) 2015, Salvatore Sanfilippo <antirez@gmail.com>.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef GEOHASH_H_
#define GEOHASH_H_
#include <stddef.h>
#include <stdint.h>
#include <stdint.h>
#if defined(__cplusplus)
extern "C" {
#endif
#define HASHISZERO(r) (!(r).bits && !(r).step)
#define RANGEISZERO(r) (!(r).max && !(r).min)
#define RANGEPISZERO(r) (r == NULL || RANGEISZERO(*r))
#define GEO_STEP_MAX 26 /* 26*2 = 52 bits. */
/* Limits from EPSG:900913 / EPSG:3785 / OSGEO:41001 */
#define GEO_LAT_MIN -85.05112878
#define GEO_LAT_MAX 85.05112878
#define GEO_LONG_MIN -180
#define GEO_LONG_MAX 180
typedef enum {
GEOHASH_NORTH = 0,
GEOHASH_EAST,
GEOHASH_WEST,
GEOHASH_SOUTH,
GEOHASH_SOUTH_WEST,
GEOHASH_SOUTH_EAST,
GEOHASH_NORT_WEST,
GEOHASH_NORT_EAST
} GeoDirection;
typedef struct {
uint64_t bits;
uint8_t step;
} GeoHashBits;
typedef struct {
double min;
double max;
} GeoHashRange;
typedef struct {
GeoHashBits hash;
GeoHashRange longitude;
GeoHashRange latitude;
} GeoHashArea;
typedef struct {
GeoHashBits north;
GeoHashBits east;
GeoHashBits west;
GeoHashBits south;
GeoHashBits north_east;
GeoHashBits south_east;
GeoHashBits north_west;
GeoHashBits south_west;
} GeoHashNeighbors;
/*
* 0:success
* -1:failed
*/
void geohashGetCoordRange(GeoHashRange *long_range, GeoHashRange *lat_range);
int geohashEncode(const GeoHashRange *long_range, const GeoHashRange *lat_range,
double longitude, double latitude, uint8_t step,
GeoHashBits *hash);
int geohashEncodeType(double longitude, double latitude,
uint8_t step, GeoHashBits *hash);
int geohashEncodeWGS84(double longitude, double latitude, uint8_t step,
GeoHashBits *hash);
int geohashDecode(const GeoHashRange long_range, const GeoHashRange lat_range,
const GeoHashBits hash, GeoHashArea *area);
int geohashDecodeType(const GeoHashBits hash, GeoHashArea *area);
int geohashDecodeWGS84(const GeoHashBits hash, GeoHashArea *area);
int geohashDecodeAreaToLongLat(const GeoHashArea *area, double *xy);
int geohashDecodeToLongLatType(const GeoHashBits hash, double *xy);
int geohashDecodeToLongLatWGS84(const GeoHashBits hash, double *xy);
int geohashDecodeToLongLatMercator(const GeoHashBits hash, double *xy);
void geohashNeighbors(const GeoHashBits *hash, GeoHashNeighbors *neighbors);
#if defined(__cplusplus)
}
#endif
#endif /* GEOHASH_H_ */
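
As a rough illustration of how the declarations above fit together, here is a minimal, hypothetical usage sketch (not part of the original sources): it encodes a WGS84 coordinate at the maximum step and decodes the resulting cell back to its center point.

#include <stdio.h>
#include "geohash.h"

int main(void) {
    GeoHashBits hash = { 0, 0 };
    /* Encode (longitude, latitude) at the maximum precision (step 26 -> 52 bits). */
    if (!geohashEncodeWGS84(13.361389, 38.115556, GEO_STEP_MAX, &hash)) {
        fprintf(stderr, "encode failed\n");
        return 1;
    }
    /* Decode the cell back and take its center as a (longitude, latitude) pair. */
    double xy[2];
    if (!geohashDecodeToLongLatWGS84(hash, xy)) {
        fprintf(stderr, "decode failed\n");
        return 1;
    }
    printf("bits=%llu step=%u center=(%f, %f)\n",
           (unsigned long long)hash.bits, (unsigned)hash.step, xy[0], xy[1]);
    return 0;
}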

View File

@ -1,217 +0,0 @@
/*
* Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>
* Copyright (c) 2014, Matt Stancliff <matt@genges.com>.
* Copyright (c) 2015-2016, Salvatore Sanfilippo <antirez@gmail.com>.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
/* This is a C++ to C conversion from the ardb project.
* This file started out as:
* https://github.com/yinqiwen/ardb/blob/d42503/src/geo/geohash_helper.cpp
*/
#include "geohash_helper.h"
#include <math.h>
#define D_R (M_PI / 180.0)
#define R_MAJOR 6378137.0
#define R_MINOR 6356752.3142
#define RATIO (R_MINOR / R_MAJOR)
#define ECCENT (sqrt(1.0 - (RATIO *RATIO)))
#define COM (0.5 * ECCENT)
/// @brief The usual PI/180 constant
const double DEG_TO_RAD = 0.017453292519943295769236907684886;
/// @brief Earth's quadratic mean radius for WGS-84
const double EARTH_RADIUS_IN_METERS = 6372797.560856;
const double MERCATOR_MAX = 20037726.37;
const double MERCATOR_MIN = -20037726.37;
static inline double deg_rad(double ang) { return ang * D_R; }
static inline double rad_deg(double ang) { return ang / D_R; }
/* This function is used in order to estimate the step (bits precision)
* of the 9 search area boxes during radius queries. */
uint8_t geohashEstimateStepsByRadius(double range_meters, double lat) {
if (range_meters == 0) return 26;
int step = 1;
while (range_meters < MERCATOR_MAX) {
range_meters *= 2;
step++;
}
step -= 2; /* Make sure range is included in most of the base cases. */
/* Wider range towards the poles... Note: it is possible to do better
* than this approximation by computing the distance between meridians
* at this latitude, but this does the trick for now. */
if (lat > 66 || lat < -66) {
step--;
if (lat > 80 || lat < -80) step--;
}
/* Frame to valid range. */
if (step < 1) step = 1;
if (step > 26) step = 26;
return step;
}
/* Return the bounding box of the search area centered at latitude,longitude
* having a radius of radius_meters. bounds[0] - bounds[2] is the minimum
* and maximum longitude, while bounds[1] - bounds[3] is the minimum and
* maximum latitude. */
int geohashBoundingBox(double longitude, double latitude, double radius_meters,
double *bounds) {
if (!bounds) return 0;
bounds[0] = longitude - rad_deg(radius_meters/EARTH_RADIUS_IN_METERS/cos(deg_rad(latitude)));
bounds[2] = longitude + rad_deg(radius_meters/EARTH_RADIUS_IN_METERS/cos(deg_rad(latitude)));
bounds[1] = latitude - rad_deg(radius_meters/EARTH_RADIUS_IN_METERS);
bounds[3] = latitude + rad_deg(radius_meters/EARTH_RADIUS_IN_METERS);
return 1;
}
/* Return a set of areas (center + 8) that are able to cover a range query
* for the specified position and radius. */
GeoHashRadius geohashGetAreasByRadius(double longitude, double latitude, double radius_meters) {
GeoHashRange long_range, lat_range;
GeoHashRadius radius;
GeoHashBits hash;
GeoHashNeighbors neighbors;
GeoHashArea area;
double min_lon, max_lon, min_lat, max_lat;
double bounds[4];
int steps;
geohashBoundingBox(longitude, latitude, radius_meters, bounds);
min_lon = bounds[0];
min_lat = bounds[1];
max_lon = bounds[2];
max_lat = bounds[3];
steps = geohashEstimateStepsByRadius(radius_meters,latitude);
geohashGetCoordRange(&long_range,&lat_range);
geohashEncode(&long_range,&lat_range,longitude,latitude,steps,&hash);
geohashNeighbors(&hash,&neighbors);
geohashDecode(long_range,lat_range,hash,&area);
/* Check if the step is enough at the limits of the covered area.
* Sometimes when the search area is near an edge of the
* area, the estimated step is not small enough, since one of the
* north / south / west / east square is too near to the search area
* to cover everything. */
int decrease_step = 0;
{
GeoHashArea north, south, east, west;
geohashDecode(long_range, lat_range, neighbors.north, &north);
geohashDecode(long_range, lat_range, neighbors.south, &south);
geohashDecode(long_range, lat_range, neighbors.east, &east);
geohashDecode(long_range, lat_range, neighbors.west, &west);
if (geohashGetDistance(longitude,latitude,longitude,north.latitude.max)
< radius_meters) decrease_step = 1;
if (geohashGetDistance(longitude,latitude,longitude,south.latitude.min)
< radius_meters) decrease_step = 1;
if (geohashGetDistance(longitude,latitude,east.longitude.max,latitude)
< radius_meters) decrease_step = 1;
if (geohashGetDistance(longitude,latitude,west.longitude.min,latitude)
< radius_meters) decrease_step = 1;
}
if (steps > 1 && decrease_step) {
steps--;
geohashEncode(&long_range,&lat_range,longitude,latitude,steps,&hash);
geohashNeighbors(&hash,&neighbors);
geohashDecode(long_range,lat_range,hash,&area);
}
/* Exclude the search areas that are useless. */
if (area.latitude.min < min_lat) {
GZERO(neighbors.south);
GZERO(neighbors.south_west);
GZERO(neighbors.south_east);
}
if (area.latitude.max > max_lat) {
GZERO(neighbors.north);
GZERO(neighbors.north_east);
GZERO(neighbors.north_west);
}
if (area.longitude.min < min_lon) {
GZERO(neighbors.west);
GZERO(neighbors.south_west);
GZERO(neighbors.north_west);
}
if (area.longitude.max > max_lon) {
GZERO(neighbors.east);
GZERO(neighbors.south_east);
GZERO(neighbors.north_east);
}
radius.hash = hash;
radius.neighbors = neighbors;
radius.area = area;
return radius;
}
GeoHashRadius geohashGetAreasByRadiusWGS84(double longitude, double latitude,
double radius_meters) {
return geohashGetAreasByRadius(longitude, latitude, radius_meters);
}
GeoHashFix52Bits geohashAlign52Bits(const GeoHashBits hash) {
uint64_t bits = hash.bits;
bits <<= (52 - hash.step * 2);
return bits;
}
/* Calculate distance using the haversine great-circle distance formula. */
double geohashGetDistance(double lon1d, double lat1d, double lon2d, double lat2d) {
double lat1r, lon1r, lat2r, lon2r, u, v;
lat1r = deg_rad(lat1d);
lon1r = deg_rad(lon1d);
lat2r = deg_rad(lat2d);
lon2r = deg_rad(lon2d);
u = sin((lat2r - lat1r) / 2);
v = sin((lon2r - lon1r) / 2);
return 2.0 * EARTH_RADIUS_IN_METERS *
asin(sqrt(u * u + cos(lat1r) * cos(lat2r) * v * v));
}
int geohashGetDistanceIfInRadius(double x1, double y1,
double x2, double y2, double radius,
double *distance) {
*distance = geohashGetDistance(x1, y1, x2, y2);
if (*distance > radius) return 0;
return 1;
}
int geohashGetDistanceIfInRadiusWGS84(double x1, double y1, double x2,
double y2, double radius,
double *distance) {
return geohashGetDistanceIfInRadius(x1, y1, x2, y2, radius, distance);
}

View File

@ -1,71 +0,0 @@
/*
* Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>
* Copyright (c) 2014, Matt Stancliff <matt@genges.com>.
* Copyright (c) 2015, Salvatore Sanfilippo <antirez@gmail.com>.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef GEOHASH_HELPER_HPP_
#define GEOHASH_HELPER_HPP_
#include <math.h>
#include "geohash.h"
#define GZERO(s) s.bits = s.step = 0;
#define GISZERO(s) (!s.bits && !s.step)
#define GISNOTZERO(s) (s.bits || s.step)
typedef uint64_t GeoHashFix52Bits;
typedef uint64_t GeoHashVarBits;
typedef struct {
GeoHashBits hash;
GeoHashArea area;
GeoHashNeighbors neighbors;
} GeoHashRadius;
int GeoHashBitsComparator(const GeoHashBits *a, const GeoHashBits *b);
uint8_t geohashEstimateStepsByRadius(double range_meters, double lat);
int geohashBoundingBox(double longitude, double latitude, double radius_meters,
double *bounds);
GeoHashRadius geohashGetAreasByRadius(double longitude,
double latitude, double radius_meters);
GeoHashRadius geohashGetAreasByRadiusWGS84(double longitude, double latitude,
double radius_meters);
GeoHashRadius geohashGetAreasByRadiusMercator(double longitude, double latitude,
double radius_meters);
GeoHashFix52Bits geohashAlign52Bits(const GeoHashBits hash);
double geohashGetDistance(double lon1d, double lat1d,
double lon2d, double lat2d);
int geohashGetDistanceIfInRadius(double x1, double y1,
double x2, double y2, double radius,
double *distance);
int geohashGetDistanceIfInRadiusWGS84(double x1, double y1, double x2,
double y2, double radius,
double *distance);
#endif /* GEOHASH_HELPER_HPP_ */
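
A hypothetical sketch of how these helpers are typically combined for a radius query (assuming the geohash headers above; the coordinates are arbitrary): compute the center cell plus its eight neighbors covering the search circle, then filter individual candidates by exact distance.

#include <stdio.h>
#include "geohash_helper.h"

int main(void) {
    double query_lon = 15.087269, query_lat = 37.502669;
    double radius_meters = 200000; /* 200 km search radius */

    /* Center cell and the 8 neighbor cells that cover the search circle. */
    GeoHashRadius area = geohashGetAreasByRadiusWGS84(query_lon, query_lat, radius_meters);
    printf("center cell: bits=%llu step=%u\n",
           (unsigned long long)area.hash.bits, (unsigned)area.hash.step);

    /* Exact distance filter for a single candidate point (longitude first). */
    double cand_lon = 13.361389, cand_lat = 38.115556, dist;
    if (geohashGetDistanceIfInRadiusWGS84(cand_lon, cand_lat, query_lon, query_lat,
                                          radius_meters, &dist))
        printf("candidate is %.0f m away, inside the radius\n", dist);
    else
        printf("candidate is %.0f m away, outside the radius\n", dist);
    return 0;
}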

View File

@ -1,6 +0,0 @@
/hiredis-test
/examples/hiredis-example*
/*.o
/*.so
/*.dylib
/*.a

View File

@ -1,6 +0,0 @@
language: c
compiler:
- gcc
- clang
script: make && make check

View File

@ -1,24 +0,0 @@
### 0.11.0
* Increase the maximum multi-bulk reply depth to 7.
* Increase the read buffer size from 2k to 16k.
* Use poll(2) instead of select(2) to support large fds (>= 1024).
### 0.10.1
* Makefile overhaul. Important to check out if you override one or more
variables using environment variables or via arguments to the "make" tool.
* Issue #45: Fix potential memory leak for a multi bulk reply with 0 elements
being created by the default reply object functions.
* Issue #43: Don't crash in an asynchronous context when Redis returns an error
reply after the connection has been made (this happens when the maximum
number of connections is reached).
### 0.10.0
* See commit log.

View File

@ -1,29 +0,0 @@
Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>
Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of Redis nor the names of its contributors may be used
to endorse or promote products derived from this software without specific
prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -1,166 +0,0 @@
# Hiredis Makefile
# Copyright (C) 2010-2011 Salvatore Sanfilippo <antirez at gmail dot com>
# Copyright (C) 2010-2011 Pieter Noordhuis <pcnoordhuis at gmail dot com>
# This file is released under the BSD license, see the COPYING file
OBJ=net.o hiredis.o sds.o async.o
EXAMPLES=hiredis-example hiredis-example-libevent hiredis-example-libev
TESTS=hiredis-test
LIBNAME=libhiredis
HIREDIS_MAJOR=0
HIREDIS_MINOR=11
# redis-server configuration used for testing
REDIS_PORT=56379
REDIS_SERVER=redis-server
define REDIS_TEST_CONFIG
daemonize yes
pidfile /tmp/hiredis-test-redis.pid
port $(REDIS_PORT)
bind 127.0.0.1
unixsocket /tmp/hiredis-test-redis.sock
endef
export REDIS_TEST_CONFIG
# Fallback to gcc when $CC is not in $PATH.
CC:=$(shell sh -c 'type $(CC) >/dev/null 2>/dev/null && echo $(CC) || echo gcc')
OPTIMIZATION?=-O3
WARNINGS=-Wall -W -Wstrict-prototypes -Wwrite-strings
DEBUG?= -g -ggdb
REAL_CFLAGS=$(OPTIMIZATION) -fPIC $(CFLAGS) $(WARNINGS) $(DEBUG) $(ARCH)
REAL_LDFLAGS=$(LDFLAGS) $(ARCH)
DYLIBSUFFIX=so
STLIBSUFFIX=a
DYLIB_MINOR_NAME=$(LIBNAME).$(DYLIBSUFFIX).$(HIREDIS_MAJOR).$(HIREDIS_MINOR)
DYLIB_MAJOR_NAME=$(LIBNAME).$(DYLIBSUFFIX).$(HIREDIS_MAJOR)
DYLIBNAME=$(LIBNAME).$(DYLIBSUFFIX)
DYLIB_MAKE_CMD=$(CC) -shared -Wl,-soname,$(DYLIB_MINOR_NAME) -o $(DYLIBNAME) $(LDFLAGS)
STLIBNAME=$(LIBNAME).$(STLIBSUFFIX)
STLIB_MAKE_CMD=ar rcs $(STLIBNAME)
# Platform-specific overrides
uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')
ifeq ($(uname_S),SunOS)
REAL_LDFLAGS+= -ldl -lnsl -lsocket
DYLIB_MAKE_CMD=$(CC) -G -o $(DYLIBNAME) -h $(DYLIB_MINOR_NAME) $(LDFLAGS)
INSTALL= cp -r
endif
ifeq ($(uname_S),Darwin)
DYLIBSUFFIX=dylib
DYLIB_MINOR_NAME=$(LIBNAME).$(HIREDIS_MAJOR).$(HIREDIS_MINOR).$(DYLIBSUFFIX)
DYLIB_MAJOR_NAME=$(LIBNAME).$(HIREDIS_MAJOR).$(DYLIBSUFFIX)
DYLIB_MAKE_CMD=$(CC) -shared -Wl,-install_name,$(DYLIB_MINOR_NAME) -o $(DYLIBNAME) $(LDFLAGS)
endif
all: $(DYLIBNAME)
# Deps (use make dep to generate this)
net.o: net.c fmacros.h net.h hiredis.h
async.o: async.c async.h hiredis.h sds.h dict.c dict.h
hiredis.o: hiredis.c fmacros.h hiredis.h net.h sds.h
sds.o: sds.c sds.h
test.o: test.c hiredis.h
$(DYLIBNAME): $(OBJ)
$(DYLIB_MAKE_CMD) $(OBJ)
$(STLIBNAME): $(OBJ)
$(STLIB_MAKE_CMD) $(OBJ)
dynamic: $(DYLIBNAME)
static: $(STLIBNAME)
# Binaries:
hiredis-example-libevent: examples/example-libevent.c adapters/libevent.h $(STLIBNAME)
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. $< -levent $(STLIBNAME)
hiredis-example-libev: examples/example-libev.c adapters/libev.h $(STLIBNAME)
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. $< -lev $(STLIBNAME)
ifndef AE_DIR
hiredis-example-ae:
@echo "Please specify AE_DIR (e.g. <redis repository>/src)"
@false
else
hiredis-example-ae: examples/example-ae.c adapters/ae.h $(STLIBNAME)
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. -I$(AE_DIR) $< $(AE_DIR)/ae.o $(AE_DIR)/zmalloc.o $(AE_DIR)/../deps/jemalloc/lib/libjemalloc.a -pthread $(STLIBNAME)
endif
ifndef LIBUV_DIR
hiredis-example-libuv:
@echo "Please specify LIBUV_DIR (e.g. ../libuv/)"
@false
else
hiredis-example-libuv: examples/example-libuv.c adapters/libuv.h $(STLIBNAME)
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. -I$(LIBUV_DIR)/include $< $(LIBUV_DIR)/.libs/libuv.a -lpthread $(STLIBNAME)
endif
hiredis-example: examples/example.c $(STLIBNAME)
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. $< $(STLIBNAME)
examples: $(EXAMPLES)
hiredis-test: test.o $(STLIBNAME)
$(CC) -o $@ $(REAL_LDFLAGS) $< $(STLIBNAME)
test: hiredis-test
./hiredis-test
check: hiredis-test
@echo "$$REDIS_TEST_CONFIG" | $(REDIS_SERVER) -
./hiredis-test -h 127.0.0.1 -p $(REDIS_PORT) -s /tmp/hiredis-test-redis.sock || \
( kill `cat /tmp/hiredis-test-redis.pid` && false )
kill `cat /tmp/hiredis-test-redis.pid`
.c.o:
$(CC) -std=c99 -pedantic -c $(REAL_CFLAGS) $<
clean:
rm -rf $(DYLIBNAME) $(STLIBNAME) $(TESTS) examples/hiredis-example* *.o *.gcda *.gcno *.gcov
dep:
$(CC) -MM *.c
# Installation related variables and target
PREFIX?=/usr/local
INSTALL_INCLUDE_PATH= $(PREFIX)/include/hiredis
INSTALL_LIBRARY_PATH= $(PREFIX)/lib
ifeq ($(uname_S),SunOS)
INSTALL?= cp -r
endif
INSTALL?= cp -a
install: $(DYLIBNAME) $(STLIBNAME)
mkdir -p $(INSTALL_INCLUDE_PATH) $(INSTALL_LIBRARY_PATH)
$(INSTALL) hiredis.h async.h adapters $(INSTALL_INCLUDE_PATH)
$(INSTALL) $(DYLIBNAME) $(INSTALL_LIBRARY_PATH)/$(DYLIB_MINOR_NAME)
cd $(INSTALL_LIBRARY_PATH) && ln -sf $(DYLIB_MINOR_NAME) $(DYLIB_MAJOR_NAME)
cd $(INSTALL_LIBRARY_PATH) && ln -sf $(DYLIB_MAJOR_NAME) $(DYLIBNAME)
$(INSTALL) $(STLIBNAME) $(INSTALL_LIBRARY_PATH)
32bit:
@echo ""
@echo "WARNING: if this fails under Linux you probably need to install libc6-dev-i386"
@echo ""
$(MAKE) CFLAGS="-m32" LDFLAGS="-m32"
gprof:
$(MAKE) CFLAGS="-pg" LDFLAGS="-pg"
gcov:
$(MAKE) CFLAGS="-fprofile-arcs -ftest-coverage" LDFLAGS="-fprofile-arcs"
coverage: gcov
make check
mkdir -p tmp/lcov
lcov -d . -c -o tmp/lcov/hiredis.info
genhtml --legend -o tmp/lcov/report tmp/lcov/hiredis.info
noopt:
$(MAKE) OPTIMIZATION=""
.PHONY: all test check clean dep install 32bit gprof gcov noopt

View File

@ -1,384 +0,0 @@
[![Build Status](https://travis-ci.org/redis/hiredis.png)](https://travis-ci.org/redis/hiredis)
# HIREDIS
Hiredis is a minimalistic C client library for the [Redis](http://redis.io/) database.
It is minimalistic because it just adds minimal support for the protocol, but
at the same time it uses a high-level printf-alike API in order to make it
much higher level than otherwise suggested by its minimal code base and the
lack of explicit bindings for every Redis command.
Apart from supporting sending commands and receiving replies, it comes with
a reply parser that is decoupled from the I/O layer. It
is a stream parser designed for easy reusability, which can for instance be used
in higher level language bindings for efficient reply parsing.
Hiredis only supports the binary-safe Redis protocol, so you can use it with any
Redis version >= 1.2.0.
The library comes with multiple APIs. There is the
*synchronous API*, the *asynchronous API* and the *reply parsing API*.
## UPGRADING
Version 0.9.0 is a major overhaul of hiredis in every aspect. However, upgrading existing
code using hiredis should not be a big pain. The key thing to keep in mind when
upgrading is that hiredis >= 0.9.0 uses a `redisContext*` to keep state, in contrast to
the stateless 0.0.1 that only has a file descriptor to work with.
## Synchronous API
To consume the synchronous API, there are only a few function calls that need to be introduced:
redisContext *redisConnect(const char *ip, int port);
void *redisCommand(redisContext *c, const char *format, ...);
void freeReplyObject(void *reply);
### Connecting
The function `redisConnect` is used to create a so-called `redisContext`. The
context is where Hiredis holds state for a connection. The `redisContext`
struct has an integer `err` field that is non-zero when the connection is in
an error state. The field `errstr` will contain a string with a description of
the error. More information on errors can be found in the **Errors** section.
After trying to connect to Redis using `redisConnect` you should
check the `err` field to see if establishing the connection was successful:
redisContext *c = redisConnect("127.0.0.1", 6379);
if (c != NULL && c->err) {
printf("Error: %s\n", c->errstr);
// handle error
}
### Sending commands
There are several ways to issue commands to Redis. The first that will be introduced is
`redisCommand`. This function takes a format similar to printf. In the simplest form,
it is used like this:
reply = redisCommand(context, "SET foo bar");
The specifier `%s` interpolates a string in the command, and uses `strlen` to
determine the length of the string:
reply = redisCommand(context, "SET foo %s", value);
When you need to pass binary safe strings in a command, the `%b` specifier can be
used. Together with a pointer to the string, it requires a `size_t` length argument
of the string:
reply = redisCommand(context, "SET foo %b", value, (size_t) valuelen);
Internally, Hiredis splits the command in different arguments and will
convert it to the protocol used to communicate with Redis.
One or more spaces separate arguments, so you can use the specifiers
anywhere in an argument:
reply = redisCommand(context, "SET key:%s %s", myid, value);
### Using replies
The return value of `redisCommand` holds a reply when the command was
successfully executed. When an error occurs, the return value is `NULL` and
the `err` field in the context will be set (see section on **Errors**).
Once an error is returned the context cannot be reused and you should set up
a new connection.
The standard replies that `redisCommand` returns are of the type `redisReply`. The
`type` field in the `redisReply` should be used to test what kind of reply
was received:
* **`REDIS_REPLY_STATUS`**:
* The command replied with a status reply. The status string can be accessed using `reply->str`.
The length of this string can be accessed using `reply->len`.
* **`REDIS_REPLY_ERROR`**:
* The command replied with an error. The error string can be accessed identically to `REDIS_REPLY_STATUS`.
* **`REDIS_REPLY_INTEGER`**:
* The command replied with an integer. The integer value can be accessed using the
`reply->integer` field of type `long long`.
* **`REDIS_REPLY_NIL`**:
* The command replied with a **nil** object. There is no data to access.
* **`REDIS_REPLY_STRING`**:
* A bulk (string) reply. The value of the reply can be accessed using `reply->str`.
The length of this string can be accessed using `reply->len`.
* **`REDIS_REPLY_ARRAY`**:
* A multi bulk reply. The number of elements in the multi bulk reply is stored in
`reply->elements`. Every element in the multi bulk reply is a `redisReply` object as well
and can be accessed via `reply->element[..index..]`.
Redis may reply with nested arrays but this is fully supported.
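As a minimal sketch (not part of the original examples), dispatching on the reply type could look like this, assuming a connected `context` and an illustrative key `mylist`:

    redisReply *reply = redisCommand(context, "LRANGE mylist 0 -1");
    if (reply == NULL) {
        printf("Error: %s\n", context->errstr);
        /* handle error: the context cannot be reused */
    } else {
        switch (reply->type) {
        case REDIS_REPLY_ERROR:
            printf("Error reply: %s\n", reply->str);
            break;
        case REDIS_REPLY_INTEGER:
            printf("Integer reply: %lld\n", reply->integer);
            break;
        case REDIS_REPLY_NIL:
            printf("Nil reply\n");
            break;
        case REDIS_REPLY_STATUS:
        case REDIS_REPLY_STRING:
            printf("String/status reply: %s\n", reply->str);
            break;
        case REDIS_REPLY_ARRAY:
            /* assumes the elements are themselves string replies */
            for (size_t i = 0; i < reply->elements; i++)
                printf("Element %zu: %s\n", i, reply->element[i]->str);
            break;
        }
        freeReplyObject(reply);
    }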
Replies should be freed using the `freeReplyObject()` function.
Note that this function will take care of freeing the sub-reply objects
contained in arrays and nested arrays, so there is no need for the user to
free the sub-replies (doing so is actually harmful and will corrupt memory).
**Important:** the current version of hiredis (0.10.0) frees replies when the
asynchronous API is used. This means you should not call `freeReplyObject` when
you use this API. The reply is cleaned up by hiredis _after_ the callback
returns. This behavior will probably change in future releases, so make sure to
keep an eye on the changelog when upgrading (see issue #39).
### Cleaning up
To disconnect and free the context the following function can be used:
void redisFree(redisContext *c);
This function immediately closes the socket and then frees the allocations done in
creating the context.
### Sending commands (cont'd)
Together with `redisCommand`, the function `redisCommandArgv` can be used to issue commands.
It has the following prototype:
void *redisCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);
It takes the number of arguments `argc`, an array of strings `argv` and the lengths of the
arguments `argvlen`. For convenience, `argvlen` may be set to `NULL` and the function will
use `strlen(3)` on every argument to determine its length. When any of the arguments
needs to be binary safe, the entire array of lengths `argvlen` should be provided.
The return value has the same semantics as `redisCommand`.
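For instance (a sketch reusing `value` and `valuelen` from the `%b` example above), the same binary-safe `SET` could be issued as:

    const char *argv[3] = {"SET", "foo", value};
    size_t argvlen[3] = {3, 3, valuelen};
    reply = redisCommandArgv(context, 3, argv, argvlen);
    freeReplyObject(reply);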
### Pipelining
To explain how Hiredis supports pipelining in a blocking connection, there needs to be
an understanding of the internal execution flow.
When any of the functions in the `redisCommand` family is called, Hiredis first formats the
command according to the Redis protocol. The formatted command is then put in the output buffer
of the context. This output buffer is dynamic, so it can hold any number of commands.
After the command is put in the output buffer, `redisGetReply` is called. This function has the
following two execution paths:
1. The input buffer is non-empty:
* Try to parse a single reply from the input buffer and return it
* If no reply could be parsed, continue at *2*
2. The input buffer is empty:
* Write the **entire** output buffer to the socket
* Read from the socket until a single reply could be parsed
The function `redisGetReply` is exported as part of the Hiredis API and can be used when a reply
is expected on the socket. To pipeline commands, the only thing that needs to be done is
to fill up the output buffer. For this purpose, two functions can be used that are identical
to the `redisCommand` family, apart from not returning a reply:
void redisAppendCommand(redisContext *c, const char *format, ...);
void redisAppendCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);
After calling either function one or more times, `redisGetReply` can be used to receive the
subsequent replies. The return value for this function is either `REDIS_OK` or `REDIS_ERR`, where
the latter means an error occurred while reading a reply. Just as with the other commands,
the `err` field in the context can be used to find out what the cause of this error is.
The following example shows a simple pipeline (resulting in only a single call to `write(2)` and
a single call to `read(2)`):
redisReply *reply;
redisAppendCommand(context,"SET foo bar");
redisAppendCommand(context,"GET foo");
redisGetReply(context,&reply); // reply for SET
freeReplyObject(reply);
redisGetReply(context,&reply); // reply for GET
freeReplyObject(reply);
This API can also be used to implement a blocking subscriber:
reply = redisCommand(context,"SUBSCRIBE foo");
freeReplyObject(reply);
while(redisGetReply(context,&reply) == REDIS_OK) {
// consume message
freeReplyObject(reply);
}
### Errors
When a function call is not successful, depending on the function either `NULL` or `REDIS_ERR` is
returned. The `err` field inside the context will be non-zero and set to one of the
following constants:
* **`REDIS_ERR_IO`**:
There was an I/O error while creating the connection, trying to write
to the socket or read from the socket. If you included `errno.h` in your
application, you can use the global `errno` variable to find out what is
wrong.
* **`REDIS_ERR_EOF`**:
The server closed the connection which resulted in an empty read.
* **`REDIS_ERR_PROTOCOL`**:
There was an error while parsing the protocol.
* **`REDIS_ERR_OTHER`**:
Any other error. Currently, it is only used when a specified hostname to connect
to cannot be resolved.
In every case, the `errstr` field in the context will be set to hold a string representation
of the error.
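As a sketch of the typical handling with the synchronous API:

    redisReply *reply = redisCommand(context, "PING");
    if (reply == NULL) {
        /* The context is now in an error state and cannot be reused. */
        printf("Command failed: %s (error code %d)\n", context->errstr, context->err);
        redisFree(context);
        /* ... set up a new connection ... */
    } else {
        freeReplyObject(reply);
    }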
## Asynchronous API
Hiredis comes with an asynchronous API that works easily with any event library.
Examples are bundled that show using Hiredis with [libev](http://software.schmorp.de/pkg/libev.html)
and [libevent](http://monkey.org/~provos/libevent/).
### Connecting
The function `redisAsyncConnect` can be used to establish a non-blocking connection to
Redis. It returns a pointer to the newly created `redisAsyncContext` struct. The `err` field
should be checked after creation to see if there were errors creating the connection.
Because the connection that will be created is non-blocking, the kernel is not able to
instantly return whether the specified host and port are able to accept a connection.
redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
if (c->err) {
printf("Error: %s\n", c->errstr);
// handle error
}
The asynchronous context can hold a disconnect callback function that is called when the
connection is disconnected (either because of an error or per user request). This function should
have the following prototype:
void(const redisAsyncContext *c, int status);
On a disconnect, the `status` argument is set to `REDIS_OK` when disconnection was initiated by the
user, or `REDIS_ERR` when the disconnection was caused by an error. When it is `REDIS_ERR`, the `err`
field in the context can be accessed to find out the cause of the error.
The context object is always freed after the disconnect callback has fired. When a reconnect is needed,
the disconnect callback is a good place to do so.
Setting the disconnect callback can only be done once per context. For subsequent calls it will
return `REDIS_ERR`. The function to set the disconnect callback has the following prototype:
int redisAsyncSetDisconnectCallback(redisAsyncContext *ac, redisDisconnectCallback *fn);
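A sketch of such a callback and its registration (the reconnect itself is only hinted at in a comment):

    void disconnectCallback(const redisAsyncContext *c, int status) {
        if (status != REDIS_OK) {
            printf("Disconnected with error: %s\n", c->errstr);
            /* this is a good place to schedule a reconnect */
            return;
        }
        printf("Disconnected on user request\n");
    }

    /* after creating the asynchronous context: */
    redisAsyncSetDisconnectCallback(c, disconnectCallback);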
### Sending commands and their callbacks
In an asynchronous context, commands are automatically pipelined due to the nature of an event loop.
Therefore, unlike the synchronous API, there is only a single way to send commands.
Because commands are sent to Redis asynchronously, issuing a command requires a callback function
that is called when the reply is received. Reply callbacks should have the following prototype:
void(redisAsyncContext *c, void *reply, void *privdata);
The `privdata` argument can be used to curry arbitrary data to the callback from the point where
the command is initially queued for execution.
The functions that can be used to issue commands in an asynchronous context are:
int redisAsyncCommand(
redisAsyncContext *ac, redisCallbackFn *fn, void *privdata,
const char *format, ...);
int redisAsyncCommandArgv(
redisAsyncContext *ac, redisCallbackFn *fn, void *privdata,
int argc, const char **argv, const size_t *argvlen);
Both functions work like their blocking counterparts. The return value is `REDIS_OK` when the command
was successfully added to the output buffer and `REDIS_ERR` otherwise. Example: when the connection
is being disconnected per user-request, no new commands may be added to the output buffer and `REDIS_ERR` is
returned on calls to the `redisAsyncCommand` family.
If the reply for a command with a `NULL` callback is read, it is immediately freed. When the callback
for a command is non-`NULL`, the memory is freed immediately following the callback: the reply is only
valid for the duration of the callback.
All pending callbacks are called with a `NULL` reply when the context encountered an error.
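As an illustrative sketch, a reply callback for a `GET` and the call that registers it could look like this (the key `foo` is just an example; note that the reply is not freed in the callback):

    void getCallback(redisAsyncContext *c, void *r, void *privdata) {
        ((void)c);
        redisReply *reply = r;
        if (reply == NULL) return;    /* the context encountered an error */
        if (reply->type == REDIS_REPLY_STRING)
            printf("GET %s: %s\n", (char *)privdata, reply->str);
        /* the reply is freed by hiredis after this callback returns */
    }

    redisAsyncCommand(c, getCallback, (char *)"foo", "GET foo");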
### Disconnecting
An asynchronous connection can be terminated using:
void redisAsyncDisconnect(redisAsyncContext *ac);
When this function is called, the connection is **not** immediately terminated. Instead, new
commands are no longer accepted and the connection is only terminated when all pending commands
have been written to the socket, their respective replies have been read and their respective
callbacks have been executed. After this, the disconnection callback is executed with the
`REDIS_OK` status and the context object is freed.
### Hooking it up to event library *X*
There are a few hooks that need to be set on the context object after it is created.
See the `adapters/` directory for bindings to *libev* and *libevent*.
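As a rough, self-contained sketch of what this looks like with the libevent adapter (the key and the callback are illustrative only):

    #include <stdio.h>
    #include "hiredis.h"
    #include "async.h"
    #include "adapters/libevent.h"

    static void getCallback(redisAsyncContext *c, void *r, void *privdata) {
        ((void)privdata);
        redisReply *reply = r;
        if (reply != NULL && reply->type == REDIS_REPLY_STRING)
            printf("GET foo: %s\n", reply->str);
        redisAsyncDisconnect(c);    /* done: trigger a clean disconnect */
    }

    int main(void) {
        struct event_base *base = event_base_new();
        redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
        if (c->err) {
            printf("Error: %s\n", c->errstr);
            return 1;
        }
        redisLibeventAttach(c, base);
        redisAsyncCommand(c, getCallback, NULL, "GET foo");
        event_base_dispatch(base);    /* returns once the connection is gone */
        return 0;
    }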
## Reply parsing API
Hiredis comes with a reply parsing API that makes it easy to write higher-level
language bindings.
The reply parsing API consists of the following functions:
redisReader *redisReaderCreate(void);
void redisReaderFree(redisReader *reader);
int redisReaderFeed(redisReader *reader, const char *buf, size_t len);
int redisReaderGetReply(redisReader *reader, void **reply);
The same set of functions is used internally by hiredis when creating a
normal Redis context; the above API just exposes them to the user for
direct use.
### Usage
The function `redisReaderCreate` creates a `redisReader` structure that holds a
buffer with unparsed data and state for the protocol parser.
Incoming data -- most likely from a socket -- can be placed in the internal
buffer of the `redisReader` using `redisReaderFeed`. This function will make a
copy of the buffer pointed to by `buf` for `len` bytes. This data is parsed
when `redisReaderGetReply` is called. This function returns an integer status
and a reply object (as described above) via `void **reply`. The returned status
can be either `REDIS_OK` or `REDIS_ERR`, where the latter means something went
wrong (either a protocol error, or an out of memory error).
The parser limits the level of nesting for multi bulk payloads to 7. If the
multi bulk nesting level is higher than this, the parser returns an error.
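A minimal sketch of driving the parser by hand, assuming `buf` and `len` hold bytes that were just read from a socket:

    redisReader *reader = redisReaderCreate();
    void *reply = NULL;

    redisReaderFeed(reader, buf, len);    /* the reader copies the bytes */

    /* Pull out as many complete replies as are currently available;
     * *reply stays NULL when more data is needed. */
    while (redisReaderGetReply(reader, &reply) == REDIS_OK && reply != NULL) {
        /* ... use the redisReply ... */
        freeReplyObject(reply);
        reply = NULL;
    }
    /* A REDIS_ERR return indicates a protocol or out-of-memory error. */

    redisReaderFree(reader);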
### Customizing replies
The function `redisReaderGetReply` creates a `redisReply` and makes the function
argument `reply` point to the created `redisReply` variable. For instance, if
the response is of type `REDIS_REPLY_STATUS`, then the `str` field of the `redisReply`
will hold the status as a vanilla C string. However, the functions that are
responsible for creating instances of the `redisReply` can be customized by
setting the `fn` field on the `redisReader` struct. This should be done
immediately after creating the `redisReader`.
For example, [hiredis-rb](https://github.com/pietern/hiredis-rb/blob/master/ext/hiredis_ext/reader.c)
uses customized reply object functions to create Ruby objects.
### Reader max buffer
Both when using the Reader API directly and when using it indirectly via a
normal Redis context, the `redisReader` structure uses a buffer in order to
accumulate data from the server.
Usually this buffer is destroyed when it is empty and larger than 16
KiB, in order to avoid wasting memory in unused buffers.
However, when working with very big payloads, destroying the buffer may slow
down performance considerably, so it is possible to modify the maximum size of
an idle buffer by changing the value of the `maxbuf` field of the reader structure
to the desired value. The special value of 0 means that there is no maximum
size for an idle buffer, so the buffer will never get freed.
For instance, if you have a normal Redis context, you can set the maximum idle
buffer to zero (unlimited) with:
context->reader->maxbuf = 0;
This should be done only in order to maximize performance when working with
large payloads. The `maxbuf` field should be set back to `REDIS_READER_MAX_BUF` again
as soon as possible in order to prevent allocation of useless memory.
## AUTHORS
Hiredis was written by Salvatore Sanfilippo (antirez at gmail) and
Pieter Noordhuis (pcnoordhuis at gmail) and is released under the BSD license.

View File

@ -1,127 +0,0 @@
/*
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __HIREDIS_AE_H__
#define __HIREDIS_AE_H__
#include <sys/types.h>
#include <ae.h>
#include "../hiredis.h"
#include "../async.h"
typedef struct redisAeEvents {
redisAsyncContext *context;
aeEventLoop *loop;
int fd;
int reading, writing;
} redisAeEvents;
static void redisAeReadEvent(aeEventLoop *el, int fd, void *privdata, int mask) {
((void)el); ((void)fd); ((void)mask);
redisAeEvents *e = (redisAeEvents*)privdata;
redisAsyncHandleRead(e->context);
}
static void redisAeWriteEvent(aeEventLoop *el, int fd, void *privdata, int mask) {
((void)el); ((void)fd); ((void)mask);
redisAeEvents *e = (redisAeEvents*)privdata;
redisAsyncHandleWrite(e->context);
}
static void redisAeAddRead(void *privdata) {
redisAeEvents *e = (redisAeEvents*)privdata;
aeEventLoop *loop = e->loop;
if (!e->reading) {
e->reading = 1;
aeCreateFileEvent(loop,e->fd,AE_READABLE,redisAeReadEvent,e);
}
}
static void redisAeDelRead(void *privdata) {
redisAeEvents *e = (redisAeEvents*)privdata;
aeEventLoop *loop = e->loop;
if (e->reading) {
e->reading = 0;
aeDeleteFileEvent(loop,e->fd,AE_READABLE);
}
}
static void redisAeAddWrite(void *privdata) {
redisAeEvents *e = (redisAeEvents*)privdata;
aeEventLoop *loop = e->loop;
if (!e->writing) {
e->writing = 1;
aeCreateFileEvent(loop,e->fd,AE_WRITABLE,redisAeWriteEvent,e);
}
}
static void redisAeDelWrite(void *privdata) {
redisAeEvents *e = (redisAeEvents*)privdata;
aeEventLoop *loop = e->loop;
if (e->writing) {
e->writing = 0;
aeDeleteFileEvent(loop,e->fd,AE_WRITABLE);
}
}
static void redisAeCleanup(void *privdata) {
redisAeEvents *e = (redisAeEvents*)privdata;
redisAeDelRead(privdata);
redisAeDelWrite(privdata);
free(e);
}
static int redisAeAttach(aeEventLoop *loop, redisAsyncContext *ac) {
redisContext *c = &(ac->c);
redisAeEvents *e;
/* Nothing should be attached when something is already attached */
if (ac->ev.data != NULL)
return REDIS_ERR;
/* Create container for context and r/w events */
e = (redisAeEvents*)malloc(sizeof(*e));
e->context = ac;
e->loop = loop;
e->fd = c->fd;
e->reading = e->writing = 0;
/* Register functions to start/stop listening for events */
ac->ev.addRead = redisAeAddRead;
ac->ev.delRead = redisAeDelRead;
ac->ev.addWrite = redisAeAddWrite;
ac->ev.delWrite = redisAeDelWrite;
ac->ev.cleanup = redisAeCleanup;
ac->ev.data = e;
return REDIS_OK;
}
#endif

View File

@ -1,147 +0,0 @@
/*
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __HIREDIS_LIBEV_H__
#define __HIREDIS_LIBEV_H__
#include <stdlib.h>
#include <sys/types.h>
#include <ev.h>
#include "../hiredis.h"
#include "../async.h"
typedef struct redisLibevEvents {
redisAsyncContext *context;
struct ev_loop *loop;
int reading, writing;
ev_io rev, wev;
} redisLibevEvents;
static void redisLibevReadEvent(EV_P_ ev_io *watcher, int revents) {
#if EV_MULTIPLICITY
((void)loop);
#endif
((void)revents);
redisLibevEvents *e = (redisLibevEvents*)watcher->data;
redisAsyncHandleRead(e->context);
}
static void redisLibevWriteEvent(EV_P_ ev_io *watcher, int revents) {
#if EV_MULTIPLICITY
((void)loop);
#endif
((void)revents);
redisLibevEvents *e = (redisLibevEvents*)watcher->data;
redisAsyncHandleWrite(e->context);
}
static void redisLibevAddRead(void *privdata) {
redisLibevEvents *e = (redisLibevEvents*)privdata;
struct ev_loop *loop = e->loop;
((void)loop);
if (!e->reading) {
e->reading = 1;
ev_io_start(EV_A_ &e->rev);
}
}
static void redisLibevDelRead(void *privdata) {
redisLibevEvents *e = (redisLibevEvents*)privdata;
struct ev_loop *loop = e->loop;
((void)loop);
if (e->reading) {
e->reading = 0;
ev_io_stop(EV_A_ &e->rev);
}
}
static void redisLibevAddWrite(void *privdata) {
redisLibevEvents *e = (redisLibevEvents*)privdata;
struct ev_loop *loop = e->loop;
((void)loop);
if (!e->writing) {
e->writing = 1;
ev_io_start(EV_A_ &e->wev);
}
}
static void redisLibevDelWrite(void *privdata) {
redisLibevEvents *e = (redisLibevEvents*)privdata;
struct ev_loop *loop = e->loop;
((void)loop);
if (e->writing) {
e->writing = 0;
ev_io_stop(EV_A_ &e->wev);
}
}
static void redisLibevCleanup(void *privdata) {
redisLibevEvents *e = (redisLibevEvents*)privdata;
redisLibevDelRead(privdata);
redisLibevDelWrite(privdata);
free(e);
}
static int redisLibevAttach(EV_P_ redisAsyncContext *ac) {
redisContext *c = &(ac->c);
redisLibevEvents *e;
/* Nothing should be attached when something is already attached */
if (ac->ev.data != NULL)
return REDIS_ERR;
/* Create container for context and r/w events */
e = (redisLibevEvents*)malloc(sizeof(*e));
e->context = ac;
#if EV_MULTIPLICITY
e->loop = loop;
#else
e->loop = NULL;
#endif
e->reading = e->writing = 0;
e->rev.data = e;
e->wev.data = e;
/* Register functions to start/stop listening for events */
ac->ev.addRead = redisLibevAddRead;
ac->ev.delRead = redisLibevDelRead;
ac->ev.addWrite = redisLibevAddWrite;
ac->ev.delWrite = redisLibevDelWrite;
ac->ev.cleanup = redisLibevCleanup;
ac->ev.data = e;
/* Initialize read/write events */
ev_io_init(&e->rev,redisLibevReadEvent,c->fd,EV_READ);
ev_io_init(&e->wev,redisLibevWriteEvent,c->fd,EV_WRITE);
return REDIS_OK;
}
#endif

View File

@ -1,108 +0,0 @@
/*
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __HIREDIS_LIBEVENT_H__
#define __HIREDIS_LIBEVENT_H__
#include <event.h>
#include "../hiredis.h"
#include "../async.h"
typedef struct redisLibeventEvents {
redisAsyncContext *context;
struct event rev, wev;
} redisLibeventEvents;
static void redisLibeventReadEvent(int fd, short event, void *arg) {
((void)fd); ((void)event);
redisLibeventEvents *e = (redisLibeventEvents*)arg;
redisAsyncHandleRead(e->context);
}
static void redisLibeventWriteEvent(int fd, short event, void *arg) {
((void)fd); ((void)event);
redisLibeventEvents *e = (redisLibeventEvents*)arg;
redisAsyncHandleWrite(e->context);
}
static void redisLibeventAddRead(void *privdata) {
redisLibeventEvents *e = (redisLibeventEvents*)privdata;
event_add(&e->rev,NULL);
}
static void redisLibeventDelRead(void *privdata) {
redisLibeventEvents *e = (redisLibeventEvents*)privdata;
event_del(&e->rev);
}
static void redisLibeventAddWrite(void *privdata) {
redisLibeventEvents *e = (redisLibeventEvents*)privdata;
event_add(&e->wev,NULL);
}
static void redisLibeventDelWrite(void *privdata) {
redisLibeventEvents *e = (redisLibeventEvents*)privdata;
event_del(&e->wev);
}
static void redisLibeventCleanup(void *privdata) {
redisLibeventEvents *e = (redisLibeventEvents*)privdata;
event_del(&e->rev);
event_del(&e->wev);
free(e);
}
static int redisLibeventAttach(redisAsyncContext *ac, struct event_base *base) {
redisContext *c = &(ac->c);
redisLibeventEvents *e;
/* Nothing should be attached when something is already attached */
if (ac->ev.data != NULL)
return REDIS_ERR;
/* Create container for context and r/w events */
e = (redisLibeventEvents*)malloc(sizeof(*e));
e->context = ac;
/* Register functions to start/stop listening for events */
ac->ev.addRead = redisLibeventAddRead;
ac->ev.delRead = redisLibeventDelRead;
ac->ev.addWrite = redisLibeventAddWrite;
ac->ev.delWrite = redisLibeventDelWrite;
ac->ev.cleanup = redisLibeventCleanup;
ac->ev.data = e;
/* Initialize and install read/write events */
event_set(&e->rev,c->fd,EV_READ,redisLibeventReadEvent,e);
event_set(&e->wev,c->fd,EV_WRITE,redisLibeventWriteEvent,e);
event_base_set(base,&e->rev);
event_base_set(base,&e->wev);
return REDIS_OK;
}
#endif

View File

@ -1,121 +0,0 @@
#ifndef __HIREDIS_LIBUV_H__
#define __HIREDIS_LIBUV_H__
#include <uv.h>
#include "../hiredis.h"
#include "../async.h"
#include <string.h>
typedef struct redisLibuvEvents {
redisAsyncContext* context;
uv_poll_t handle;
int events;
} redisLibuvEvents;
int redisLibuvAttach(redisAsyncContext*, uv_loop_t*);
static void redisLibuvPoll(uv_poll_t* handle, int status, int events) {
redisLibuvEvents* p = (redisLibuvEvents*)handle->data;
if (status != 0) {
return;
}
if (events & UV_READABLE) {
redisAsyncHandleRead(p->context);
}
if (events & UV_WRITABLE) {
redisAsyncHandleWrite(p->context);
}
}
static void redisLibuvAddRead(void *privdata) {
redisLibuvEvents* p = (redisLibuvEvents*)privdata;
p->events |= UV_READABLE;
uv_poll_start(&p->handle, p->events, redisLibuvPoll);
}
static void redisLibuvDelRead(void *privdata) {
redisLibuvEvents* p = (redisLibuvEvents*)privdata;
p->events &= ~UV_READABLE;
if (p->events) {
uv_poll_start(&p->handle, p->events, redisLibuvPoll);
} else {
uv_poll_stop(&p->handle);
}
}
static void redisLibuvAddWrite(void *privdata) {
redisLibuvEvents* p = (redisLibuvEvents*)privdata;
p->events |= UV_WRITABLE;
uv_poll_start(&p->handle, p->events, redisLibuvPoll);
}
static void redisLibuvDelWrite(void *privdata) {
redisLibuvEvents* p = (redisLibuvEvents*)privdata;
p->events &= ~UV_WRITABLE;
if (p->events) {
uv_poll_start(&p->handle, p->events, redisLibuvPoll);
} else {
uv_poll_stop(&p->handle);
}
}
static void on_close(uv_handle_t* handle) {
redisLibuvEvents* p = (redisLibuvEvents*)handle->data;
free(p);
}
static void redisLibuvCleanup(void *privdata) {
redisLibuvEvents* p = (redisLibuvEvents*)privdata;
uv_close((uv_handle_t*)&p->handle, on_close);
}
static int redisLibuvAttach(redisAsyncContext* ac, uv_loop_t* loop) {
redisContext *c = &(ac->c);
if (ac->ev.data != NULL) {
return REDIS_ERR;
}
ac->ev.addRead = redisLibuvAddRead;
ac->ev.delRead = redisLibuvDelRead;
ac->ev.addWrite = redisLibuvAddWrite;
ac->ev.delWrite = redisLibuvDelWrite;
ac->ev.cleanup = redisLibuvCleanup;
redisLibuvEvents* p = (redisLibuvEvents*)malloc(sizeof(*p));
if (!p) {
return REDIS_ERR;
}
memset(p, 0, sizeof(*p));
if (uv_poll_init(loop, &p->handle, c->fd) != 0) {
return REDIS_ERR;
}
ac->ev.data = p;
p->handle.data = p;
p->context = ac;
return REDIS_OK;
}
#endif

View File

@ -1,662 +0,0 @@
/*
* Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include "fmacros.h"
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include "async.h"
#include "net.h"
#include "dict.c"
#include "sds.h"
#define _EL_ADD_READ(ctx) do { \
if ((ctx)->ev.addRead) (ctx)->ev.addRead((ctx)->ev.data); \
} while(0)
#define _EL_DEL_READ(ctx) do { \
if ((ctx)->ev.delRead) (ctx)->ev.delRead((ctx)->ev.data); \
} while(0)
#define _EL_ADD_WRITE(ctx) do { \
if ((ctx)->ev.addWrite) (ctx)->ev.addWrite((ctx)->ev.data); \
} while(0)
#define _EL_DEL_WRITE(ctx) do { \
if ((ctx)->ev.delWrite) (ctx)->ev.delWrite((ctx)->ev.data); \
} while(0)
#define _EL_CLEANUP(ctx) do { \
if ((ctx)->ev.cleanup) (ctx)->ev.cleanup((ctx)->ev.data); \
} while(0);
/* Forward declaration of function in hiredis.c */
void __redisAppendCommand(redisContext *c, char *cmd, size_t len);
/* Functions managing dictionary of callbacks for pub/sub. */
static unsigned int callbackHash(const void *key) {
return dictGenHashFunction((const unsigned char *)key,
sdslen((const sds)key));
}
static void *callbackValDup(void *privdata, const void *src) {
((void) privdata);
redisCallback *dup = malloc(sizeof(*dup));
memcpy(dup,src,sizeof(*dup));
return dup;
}
static int callbackKeyCompare(void *privdata, const void *key1, const void *key2) {
int l1, l2;
((void) privdata);
l1 = sdslen((const sds)key1);
l2 = sdslen((const sds)key2);
if (l1 != l2) return 0;
return memcmp(key1,key2,l1) == 0;
}
static void callbackKeyDestructor(void *privdata, void *key) {
((void) privdata);
sdsfree((sds)key);
}
static void callbackValDestructor(void *privdata, void *val) {
((void) privdata);
free(val);
}
static dictType callbackDict = {
callbackHash,
NULL,
callbackValDup,
callbackKeyCompare,
callbackKeyDestructor,
callbackValDestructor
};
static redisAsyncContext *redisAsyncInitialize(redisContext *c) {
redisAsyncContext *ac;
ac = realloc(c,sizeof(redisAsyncContext));
if (ac == NULL)
return NULL;
c = &(ac->c);
/* The regular connect functions will always set the flag REDIS_CONNECTED.
* For the async API, we want to wait until the first write event is
* received up before setting this flag, so reset it here. */
c->flags &= ~REDIS_CONNECTED;
ac->err = 0;
ac->errstr = NULL;
ac->data = NULL;
ac->ev.data = NULL;
ac->ev.addRead = NULL;
ac->ev.delRead = NULL;
ac->ev.addWrite = NULL;
ac->ev.delWrite = NULL;
ac->ev.cleanup = NULL;
ac->onConnect = NULL;
ac->onDisconnect = NULL;
ac->replies.head = NULL;
ac->replies.tail = NULL;
ac->sub.invalid.head = NULL;
ac->sub.invalid.tail = NULL;
ac->sub.channels = dictCreate(&callbackDict,NULL);
ac->sub.patterns = dictCreate(&callbackDict,NULL);
return ac;
}
/* We want the error field to be accessible directly instead of requiring
* an indirection to the redisContext struct. */
static void __redisAsyncCopyError(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
ac->err = c->err;
ac->errstr = c->errstr;
}
redisAsyncContext *redisAsyncConnect(const char *ip, int port) {
redisContext *c;
redisAsyncContext *ac;
c = redisConnectNonBlock(ip,port);
if (c == NULL)
return NULL;
ac = redisAsyncInitialize(c);
if (ac == NULL) {
redisFree(c);
return NULL;
}
__redisAsyncCopyError(ac);
return ac;
}
redisAsyncContext *redisAsyncConnectBind(const char *ip, int port,
const char *source_addr) {
redisContext *c = redisConnectBindNonBlock(ip,port,source_addr);
redisAsyncContext *ac = redisAsyncInitialize(c);
__redisAsyncCopyError(ac);
return ac;
}
redisAsyncContext *redisAsyncConnectUnix(const char *path) {
redisContext *c;
redisAsyncContext *ac;
c = redisConnectUnixNonBlock(path);
if (c == NULL)
return NULL;
ac = redisAsyncInitialize(c);
if (ac == NULL) {
redisFree(c);
return NULL;
}
__redisAsyncCopyError(ac);
return ac;
}
int redisAsyncSetConnectCallback(redisAsyncContext *ac, redisConnectCallback *fn) {
if (ac->onConnect == NULL) {
ac->onConnect = fn;
/* The common way to detect an established connection is to wait for
* the first write event to be fired. This assumes the related event
* library functions are already set. */
_EL_ADD_WRITE(ac);
return REDIS_OK;
}
return REDIS_ERR;
}
int redisAsyncSetDisconnectCallback(redisAsyncContext *ac, redisDisconnectCallback *fn) {
if (ac->onDisconnect == NULL) {
ac->onDisconnect = fn;
return REDIS_OK;
}
return REDIS_ERR;
}
/* Helper functions to push/shift callbacks */
static int __redisPushCallback(redisCallbackList *list, redisCallback *source) {
redisCallback *cb;
/* Copy callback from stack to heap */
cb = malloc(sizeof(*cb));
if (cb == NULL)
return REDIS_ERR_OOM;
if (source != NULL) {
memcpy(cb,source,sizeof(*cb));
cb->next = NULL;
}
/* Store callback in list */
if (list->head == NULL)
list->head = cb;
if (list->tail != NULL)
list->tail->next = cb;
list->tail = cb;
return REDIS_OK;
}
static int __redisShiftCallback(redisCallbackList *list, redisCallback *target) {
redisCallback *cb = list->head;
if (cb != NULL) {
list->head = cb->next;
if (cb == list->tail)
list->tail = NULL;
/* Copy callback from heap to stack */
if (target != NULL)
memcpy(target,cb,sizeof(*cb));
free(cb);
return REDIS_OK;
}
return REDIS_ERR;
}
static void __redisRunCallback(redisAsyncContext *ac, redisCallback *cb, redisReply *reply) {
redisContext *c = &(ac->c);
if (cb->fn != NULL) {
c->flags |= REDIS_IN_CALLBACK;
cb->fn(ac,reply,cb->privdata);
c->flags &= ~REDIS_IN_CALLBACK;
}
}
/* Helper function to free the context. */
static void __redisAsyncFree(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
redisCallback cb;
dictIterator *it;
dictEntry *de;
/* Execute pending callbacks with NULL reply. */
while (__redisShiftCallback(&ac->replies,&cb) == REDIS_OK)
__redisRunCallback(ac,&cb,NULL);
/* Execute callbacks for invalid commands */
while (__redisShiftCallback(&ac->sub.invalid,&cb) == REDIS_OK)
__redisRunCallback(ac,&cb,NULL);
    /* Run subscription callbacks with NULL reply */
it = dictGetIterator(ac->sub.channels);
while ((de = dictNext(it)) != NULL)
__redisRunCallback(ac,dictGetEntryVal(de),NULL);
dictReleaseIterator(it);
dictRelease(ac->sub.channels);
it = dictGetIterator(ac->sub.patterns);
while ((de = dictNext(it)) != NULL)
__redisRunCallback(ac,dictGetEntryVal(de),NULL);
dictReleaseIterator(it);
dictRelease(ac->sub.patterns);
/* Signal event lib to clean up */
_EL_CLEANUP(ac);
/* Execute disconnect callback. When redisAsyncFree() initiated destroying
* this context, the status will always be REDIS_OK. */
if (ac->onDisconnect && (c->flags & REDIS_CONNECTED)) {
if (c->flags & REDIS_FREEING) {
ac->onDisconnect(ac,REDIS_OK);
} else {
ac->onDisconnect(ac,(ac->err == 0) ? REDIS_OK : REDIS_ERR);
}
}
/* Cleanup self */
redisFree(c);
}
/* Free the async context. When this function is called from a callback,
* control needs to be returned to redisProcessCallbacks() before actual
* free'ing. To do so, a flag is set on the context which is picked up by
* redisProcessCallbacks(). Otherwise, the context is immediately free'd. */
void redisAsyncFree(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
c->flags |= REDIS_FREEING;
if (!(c->flags & REDIS_IN_CALLBACK))
__redisAsyncFree(ac);
}
/* Helper function to make the disconnect happen and clean up. */
static void __redisAsyncDisconnect(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
/* Make sure error is accessible if there is any */
__redisAsyncCopyError(ac);
if (ac->err == 0) {
/* For clean disconnects, there should be no pending callbacks. */
assert(__redisShiftCallback(&ac->replies,NULL) == REDIS_ERR);
} else {
/* Disconnection is caused by an error, make sure that pending
* callbacks cannot call new commands. */
c->flags |= REDIS_DISCONNECTING;
}
/* For non-clean disconnects, __redisAsyncFree() will execute pending
* callbacks with a NULL-reply. */
__redisAsyncFree(ac);
}
/* Tries to do a clean disconnect from Redis, meaning it stops new commands
* from being issued, but tries to flush the output buffer and execute
* callbacks for all remaining replies. When this function is called from a
* callback, there might be more replies and we can safely defer disconnecting
* to redisProcessCallbacks(). Otherwise, we can only disconnect immediately
* when there are no pending callbacks. */
void redisAsyncDisconnect(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
c->flags |= REDIS_DISCONNECTING;
if (!(c->flags & REDIS_IN_CALLBACK) && ac->replies.head == NULL)
__redisAsyncDisconnect(ac);
}
static int __redisGetSubscribeCallback(redisAsyncContext *ac, redisReply *reply, redisCallback *dstcb) {
redisContext *c = &(ac->c);
dict *callbacks;
dictEntry *de;
int pvariant;
char *stype;
sds sname;
/* Custom reply functions are not supported for pub/sub. This will fail
* very hard when they are used... */
if (reply->type == REDIS_REPLY_ARRAY) {
assert(reply->elements >= 2);
assert(reply->element[0]->type == REDIS_REPLY_STRING);
stype = reply->element[0]->str;
pvariant = (tolower(stype[0]) == 'p') ? 1 : 0;
if (pvariant)
callbacks = ac->sub.patterns;
else
callbacks = ac->sub.channels;
/* Locate the right callback */
assert(reply->element[1]->type == REDIS_REPLY_STRING);
sname = sdsnewlen(reply->element[1]->str,reply->element[1]->len);
de = dictFind(callbacks,sname);
if (de != NULL) {
memcpy(dstcb,dictGetEntryVal(de),sizeof(*dstcb));
/* If this is an unsubscribe message, remove it. */
if (strcasecmp(stype+pvariant,"unsubscribe") == 0) {
dictDelete(callbacks,sname);
/* If this was the last unsubscribe message, revert to
* non-subscribe mode. */
assert(reply->element[2]->type == REDIS_REPLY_INTEGER);
if (reply->element[2]->integer == 0)
c->flags &= ~REDIS_SUBSCRIBED;
}
}
sdsfree(sname);
} else {
/* Shift callback for invalid commands. */
__redisShiftCallback(&ac->sub.invalid,dstcb);
}
return REDIS_OK;
}
void redisProcessCallbacks(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
redisCallback cb = {NULL, NULL, NULL};
void *reply = NULL;
int status;
while((status = redisGetReply(c,&reply)) == REDIS_OK) {
if (reply == NULL) {
/* When the connection is being disconnected and there are
* no more replies, this is the cue to really disconnect. */
if (c->flags & REDIS_DISCONNECTING && sdslen(c->obuf) == 0) {
__redisAsyncDisconnect(ac);
return;
}
/* If monitor mode, repush callback */
if(c->flags & REDIS_MONITORING) {
__redisPushCallback(&ac->replies,&cb);
}
/* When the connection is not being disconnected, simply stop
* trying to get replies and wait for the next loop tick. */
break;
}
/* Even if the context is subscribed, pending regular callbacks will
* get a reply before pub/sub messages arrive. */
if (__redisShiftCallback(&ac->replies,&cb) != REDIS_OK) {
/*
* A spontaneous reply in a not-subscribed context can be the error
* reply that is sent when a new connection exceeds the maximum
* number of allowed connections on the server side.
*
* This is seen as an error instead of a regular reply because the
* server closes the connection after sending it.
*
* To prevent the error from being overwritten by an EOF error the
* connection is closed here. See issue #43.
*
* Another possibility is that the server is loading its dataset.
* In this case we also want to close the connection, and have the
* user wait until the server is ready to take our request.
*/
if (((redisReply*)reply)->type == REDIS_REPLY_ERROR) {
c->err = REDIS_ERR_OTHER;
snprintf(c->errstr,sizeof(c->errstr),"%s",((redisReply*)reply)->str);
c->reader->fn->freeObject(reply);
__redisAsyncDisconnect(ac);
return;
}
/* No more regular callbacks and no errors, the context *must* be subscribed or monitoring. */
assert((c->flags & REDIS_SUBSCRIBED || c->flags & REDIS_MONITORING));
if(c->flags & REDIS_SUBSCRIBED)
__redisGetSubscribeCallback(ac,reply,&cb);
}
if (cb.fn != NULL) {
__redisRunCallback(ac,&cb,reply);
c->reader->fn->freeObject(reply);
/* Proceed with free'ing when redisAsyncFree() was called. */
if (c->flags & REDIS_FREEING) {
__redisAsyncFree(ac);
return;
}
} else {
/* No callback for this reply. This can either be a NULL callback,
* or there were no callbacks to begin with. Either way, don't
* abort with an error, but simply ignore it because the client
* doesn't know what the server will spit out over the wire. */
c->reader->fn->freeObject(reply);
}
}
/* Disconnect when there was an error reading the reply */
if (status != REDIS_OK)
__redisAsyncDisconnect(ac);
}
/* Internal helper function to detect socket status the first time a read or
 * write event fires. When connecting was not successful, the connect callback
* is called with a REDIS_ERR status and the context is free'd. */
static int __redisAsyncHandleConnect(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
if (redisCheckSocketError(c) == REDIS_ERR) {
/* Try again later when connect(2) is still in progress. */
if (errno == EINPROGRESS)
return REDIS_OK;
if (ac->onConnect) ac->onConnect(ac,REDIS_ERR);
__redisAsyncDisconnect(ac);
return REDIS_ERR;
}
/* Mark context as connected. */
c->flags |= REDIS_CONNECTED;
if (ac->onConnect) ac->onConnect(ac,REDIS_OK);
return REDIS_OK;
}
/* This function should be called when the socket is readable.
* It processes all replies that can be read and executes their callbacks.
*/
void redisAsyncHandleRead(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
if (!(c->flags & REDIS_CONNECTED)) {
/* Abort connect was not successful. */
if (__redisAsyncHandleConnect(ac) != REDIS_OK)
return;
/* Try again later when the context is still not connected. */
if (!(c->flags & REDIS_CONNECTED))
return;
}
if (redisBufferRead(c) == REDIS_ERR) {
__redisAsyncDisconnect(ac);
} else {
/* Always re-schedule reads */
_EL_ADD_READ(ac);
redisProcessCallbacks(ac);
}
}
void redisAsyncHandleWrite(redisAsyncContext *ac) {
redisContext *c = &(ac->c);
int done = 0;
if (!(c->flags & REDIS_CONNECTED)) {
/* Abort connect was not successful. */
if (__redisAsyncHandleConnect(ac) != REDIS_OK)
return;
/* Try again later when the context is still not connected. */
if (!(c->flags & REDIS_CONNECTED))
return;
}
if (redisBufferWrite(c,&done) == REDIS_ERR) {
__redisAsyncDisconnect(ac);
} else {
/* Continue writing when not done, stop writing otherwise */
if (!done)
_EL_ADD_WRITE(ac);
else
_EL_DEL_WRITE(ac);
/* Always schedule reads after writes */
_EL_ADD_READ(ac);
}
}
/* Sets a pointer to the first argument and its length starting at p. Returns
* the number of bytes to skip to get to the following argument. */
static char *nextArgument(char *start, char **str, size_t *len) {
char *p = start;
if (p[0] != '$') {
p = strchr(p,'$');
if (p == NULL) return NULL;
}
*len = (int)strtol(p+1,NULL,10);
p = strchr(p,'\r');
assert(p);
*str = p+2;
return p+2+(*len)+2;
}
/* Helper function for the redisAsyncCommand* family of functions. Writes a
* formatted command to the output buffer and registers the provided callback
* function with the context. */
static int __redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, char *cmd, size_t len) {
redisContext *c = &(ac->c);
redisCallback cb;
int pvariant, hasnext;
char *cstr, *astr;
size_t clen, alen;
char *p;
sds sname;
/* Don't accept new commands when the connection is about to be closed. */
if (c->flags & (REDIS_DISCONNECTING | REDIS_FREEING)) return REDIS_ERR;
/* Setup callback */
cb.fn = fn;
cb.privdata = privdata;
/* Find out which command will be appended. */
p = nextArgument(cmd,&cstr,&clen);
assert(p != NULL);
hasnext = (p[0] == '$');
pvariant = (tolower(cstr[0]) == 'p') ? 1 : 0;
cstr += pvariant;
clen -= pvariant;
if (hasnext && strncasecmp(cstr,"subscribe\r\n",11) == 0) {
c->flags |= REDIS_SUBSCRIBED;
/* Add every channel/pattern to the list of subscription callbacks. */
while ((p = nextArgument(p,&astr,&alen)) != NULL) {
sname = sdsnewlen(astr,alen);
if (pvariant)
dictReplace(ac->sub.patterns,sname,&cb);
else
dictReplace(ac->sub.channels,sname,&cb);
}
} else if (strncasecmp(cstr,"unsubscribe\r\n",13) == 0) {
/* It is only useful to call (P)UNSUBSCRIBE when the context is
* subscribed to one or more channels or patterns. */
if (!(c->flags & REDIS_SUBSCRIBED)) return REDIS_ERR;
/* (P)UNSUBSCRIBE does not have its own response: every channel or
* pattern that is unsubscribed will receive a message. This means we
* should not append a callback function for this command. */
} else if(strncasecmp(cstr,"monitor\r\n",9) == 0) {
/* Set monitor flag and push callback */
c->flags |= REDIS_MONITORING;
__redisPushCallback(&ac->replies,&cb);
} else {
if (c->flags & REDIS_SUBSCRIBED)
/* This will likely result in an error reply, but it needs to be
* received and passed to the callback. */
__redisPushCallback(&ac->sub.invalid,&cb);
else
__redisPushCallback(&ac->replies,&cb);
}
__redisAppendCommand(c,cmd,len);
/* Always schedule a write when the write buffer is non-empty */
_EL_ADD_WRITE(ac);
return REDIS_OK;
}
int redisvAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, va_list ap) {
char *cmd;
int len;
int status;
len = redisvFormatCommand(&cmd,format,ap);
status = __redisAsyncCommand(ac,fn,privdata,cmd,len);
free(cmd);
return status;
}
int redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, ...) {
va_list ap;
int status;
va_start(ap,format);
status = redisvAsyncCommand(ac,fn,privdata,format,ap);
va_end(ap);
return status;
}
int redisAsyncCommandArgv(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, int argc, const char **argv, const size_t *argvlen) {
char *cmd;
int len;
int status;
len = redisFormatCommandArgv(&cmd,argc,argv,argvlen);
status = __redisAsyncCommand(ac,fn,privdata,cmd,len);
free(cmd);
return status;
}

View File

@ -1,126 +0,0 @@
/*
* Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __HIREDIS_ASYNC_H
#define __HIREDIS_ASYNC_H
#include "hiredis.h"
#ifdef __cplusplus
extern "C" {
#endif
struct redisAsyncContext; /* need forward declaration of redisAsyncContext */
struct dict; /* dictionary header is included in async.c */
/* Reply callback prototype and container */
typedef void (redisCallbackFn)(struct redisAsyncContext*, void*, void*);
typedef struct redisCallback {
struct redisCallback *next; /* simple singly linked list */
redisCallbackFn *fn;
void *privdata;
} redisCallback;
/* List of callbacks for either regular replies or pub/sub */
typedef struct redisCallbackList {
redisCallback *head, *tail;
} redisCallbackList;
/* Connection callback prototypes */
typedef void (redisDisconnectCallback)(const struct redisAsyncContext*, int status);
typedef void (redisConnectCallback)(const struct redisAsyncContext*, int status);
/* Context for an async connection to Redis */
typedef struct redisAsyncContext {
/* Hold the regular context, so it can be realloc'ed. */
redisContext c;
/* Setup error flags so they can be used directly. */
int err;
char *errstr;
/* Not used by hiredis */
void *data;
/* Event library data and hooks */
struct {
void *data;
/* Hooks that are called when the library expects to start
* reading/writing. These functions should be idempotent. */
void (*addRead)(void *privdata);
void (*delRead)(void *privdata);
void (*addWrite)(void *privdata);
void (*delWrite)(void *privdata);
void (*cleanup)(void *privdata);
} ev;
/* Called when either the connection is terminated due to an error or per
* user request. The status is set accordingly (REDIS_OK, REDIS_ERR). */
redisDisconnectCallback *onDisconnect;
/* Called when the first write event was received. */
redisConnectCallback *onConnect;
/* Regular command callbacks */
redisCallbackList replies;
/* Subscription callbacks */
struct {
redisCallbackList invalid;
struct dict *channels;
struct dict *patterns;
} sub;
} redisAsyncContext;
/* Functions that proxy to hiredis */
redisAsyncContext *redisAsyncConnect(const char *ip, int port);
redisAsyncContext *redisAsyncConnectBind(const char *ip, int port, const char *source_addr);
redisAsyncContext *redisAsyncConnectUnix(const char *path);
int redisAsyncSetConnectCallback(redisAsyncContext *ac, redisConnectCallback *fn);
int redisAsyncSetDisconnectCallback(redisAsyncContext *ac, redisDisconnectCallback *fn);
void redisAsyncDisconnect(redisAsyncContext *ac);
void redisAsyncFree(redisAsyncContext *ac);
/* Handle read/write events */
void redisAsyncHandleRead(redisAsyncContext *ac);
void redisAsyncHandleWrite(redisAsyncContext *ac);
/* Command functions for an async context. Write the command to the
* output buffer and register the provided callback. */
int redisvAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, va_list ap);
int redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, ...);
int redisAsyncCommandArgv(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, int argc, const char **argv, const size_t *argvlen);
#ifdef __cplusplus
}
#endif
#endif

View File

@ -1,338 +0,0 @@
/* Hash table implementation.
*
* This file implements in memory hash tables with insert/del/replace/find/
* get-random-element operations. Hash tables will auto resize if needed
* tables of power of two in size are used, collisions are handled by
* chaining. See the source code for more information... :)
*
* Copyright (c) 2006-2010, Salvatore Sanfilippo <antirez at gmail dot com>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include "fmacros.h"
#include <stdlib.h>
#include <assert.h>
#include <limits.h>
#include "dict.h"
/* -------------------------- private prototypes ---------------------------- */
static int _dictExpandIfNeeded(dict *ht);
static unsigned long _dictNextPower(unsigned long size);
static int _dictKeyIndex(dict *ht, const void *key);
static int _dictInit(dict *ht, dictType *type, void *privDataPtr);
/* -------------------------- hash functions -------------------------------- */
/* Generic hash function (a popular one from Bernstein).
* I tested a few and this was the best. */
static unsigned int dictGenHashFunction(const unsigned char *buf, int len) {
unsigned int hash = 5381;
while (len--)
hash = ((hash << 5) + hash) + (*buf++); /* hash * 33 + c */
return hash;
}
/* ----------------------------- API implementation ------------------------- */
/* Reset an hashtable already initialized with ht_init().
* NOTE: This function should only called by ht_destroy(). */
static void _dictReset(dict *ht) {
ht->table = NULL;
ht->size = 0;
ht->sizemask = 0;
ht->used = 0;
}
/* Create a new hash table */
static dict *dictCreate(dictType *type, void *privDataPtr) {
dict *ht = malloc(sizeof(*ht));
_dictInit(ht,type,privDataPtr);
return ht;
}
/* Initialize the hash table */
static int _dictInit(dict *ht, dictType *type, void *privDataPtr) {
_dictReset(ht);
ht->type = type;
ht->privdata = privDataPtr;
return DICT_OK;
}
/* Expand or create the hashtable */
static int dictExpand(dict *ht, unsigned long size) {
dict n; /* the new hashtable */
unsigned long realsize = _dictNextPower(size), i;
/* the size is invalid if it is smaller than the number of
* elements already inside the hashtable */
if (ht->used > size)
return DICT_ERR;
_dictInit(&n, ht->type, ht->privdata);
n.size = realsize;
n.sizemask = realsize-1;
n.table = calloc(realsize,sizeof(dictEntry*));
/* Copy all the elements from the old to the new table:
* note that if the old hash table is empty ht->size is zero,
* so dictExpand just creates an hash table. */
n.used = ht->used;
for (i = 0; i < ht->size && ht->used > 0; i++) {
dictEntry *he, *nextHe;
if (ht->table[i] == NULL) continue;
/* For each hash entry on this slot... */
he = ht->table[i];
while(he) {
unsigned int h;
nextHe = he->next;
/* Get the new element index */
h = dictHashKey(ht, he->key) & n.sizemask;
he->next = n.table[h];
n.table[h] = he;
ht->used--;
/* Pass to the next element */
he = nextHe;
}
}
assert(ht->used == 0);
free(ht->table);
/* Remap the new hashtable in the old */
*ht = n;
return DICT_OK;
}
/* Add an element to the target hash table */
static int dictAdd(dict *ht, void *key, void *val) {
int index;
dictEntry *entry;
/* Get the index of the new element, or -1 if
* the element already exists. */
if ((index = _dictKeyIndex(ht, key)) == -1)
return DICT_ERR;
/* Allocates the memory and stores key */
entry = malloc(sizeof(*entry));
entry->next = ht->table[index];
ht->table[index] = entry;
/* Set the hash entry fields. */
dictSetHashKey(ht, entry, key);
dictSetHashVal(ht, entry, val);
ht->used++;
return DICT_OK;
}
/* Add an element, discarding the old if the key already exists.
* Return 1 if the key was added from scratch, 0 if there was already an
* element with such key and dictReplace() just performed a value update
* operation. */
static int dictReplace(dict *ht, void *key, void *val) {
dictEntry *entry, auxentry;
    /* Try to add the element. If the key
     * does not exist, dictAdd will succeed. */
if (dictAdd(ht, key, val) == DICT_OK)
return 1;
/* It already exists, get the entry */
entry = dictFind(ht, key);
    /* Set the new value and free the old one. Note that it is important
     * to do that in this order, as the value may be exactly the same
     * as the previous one. In this context, think of reference counting:
     * you want to increment (set) and then decrement (free), not the
     * reverse. */
auxentry = *entry;
dictSetHashVal(ht, entry, val);
dictFreeEntryVal(ht, &auxentry);
return 0;
}
/* Search and remove an element */
static int dictDelete(dict *ht, const void *key) {
unsigned int h;
dictEntry *de, *prevde;
if (ht->size == 0)
return DICT_ERR;
h = dictHashKey(ht, key) & ht->sizemask;
de = ht->table[h];
prevde = NULL;
while(de) {
if (dictCompareHashKeys(ht,key,de->key)) {
/* Unlink the element from the list */
if (prevde)
prevde->next = de->next;
else
ht->table[h] = de->next;
dictFreeEntryKey(ht,de);
dictFreeEntryVal(ht,de);
free(de);
ht->used--;
return DICT_OK;
}
prevde = de;
de = de->next;
}
return DICT_ERR; /* not found */
}
/* Destroy an entire hash table */
static int _dictClear(dict *ht) {
unsigned long i;
/* Free all the elements */
for (i = 0; i < ht->size && ht->used > 0; i++) {
dictEntry *he, *nextHe;
if ((he = ht->table[i]) == NULL) continue;
while(he) {
nextHe = he->next;
dictFreeEntryKey(ht, he);
dictFreeEntryVal(ht, he);
free(he);
ht->used--;
he = nextHe;
}
}
/* Free the table and the allocated cache structure */
free(ht->table);
/* Re-initialize the table */
_dictReset(ht);
return DICT_OK; /* never fails */
}
/* Clear & Release the hash table */
static void dictRelease(dict *ht) {
_dictClear(ht);
free(ht);
}
static dictEntry *dictFind(dict *ht, const void *key) {
dictEntry *he;
unsigned int h;
if (ht->size == 0) return NULL;
h = dictHashKey(ht, key) & ht->sizemask;
he = ht->table[h];
while(he) {
if (dictCompareHashKeys(ht, key, he->key))
return he;
he = he->next;
}
return NULL;
}
static dictIterator *dictGetIterator(dict *ht) {
dictIterator *iter = malloc(sizeof(*iter));
iter->ht = ht;
iter->index = -1;
iter->entry = NULL;
iter->nextEntry = NULL;
return iter;
}
static dictEntry *dictNext(dictIterator *iter) {
while (1) {
if (iter->entry == NULL) {
iter->index++;
if (iter->index >=
(signed)iter->ht->size) break;
iter->entry = iter->ht->table[iter->index];
} else {
iter->entry = iter->nextEntry;
}
if (iter->entry) {
/* We need to save the 'next' here, the iterator user
* may delete the entry we are returning. */
iter->nextEntry = iter->entry->next;
return iter->entry;
}
}
return NULL;
}
static void dictReleaseIterator(dictIterator *iter) {
free(iter);
}
/* ------------------------- private functions ------------------------------ */
/* Expand the hash table if needed */
static int _dictExpandIfNeeded(dict *ht) {
    /* If the hash table is empty, expand it to the initial size;
     * if the table is "full", double its size. */
if (ht->size == 0)
return dictExpand(ht, DICT_HT_INITIAL_SIZE);
if (ht->used == ht->size)
return dictExpand(ht, ht->size*2);
return DICT_OK;
}
/* Our hash table capacity is always a power of two */
static unsigned long _dictNextPower(unsigned long size) {
unsigned long i = DICT_HT_INITIAL_SIZE;
if (size >= LONG_MAX) return LONG_MAX;
while(1) {
if (i >= size)
return i;
i *= 2;
}
}
/* Returns the index of a free slot that can be populated with
 * a hash entry for the given 'key'.
* If the key already exists, -1 is returned. */
static int _dictKeyIndex(dict *ht, const void *key) {
unsigned int h;
dictEntry *he;
/* Expand the hashtable if needed */
if (_dictExpandIfNeeded(ht) == DICT_ERR)
return -1;
/* Compute the key hash value */
h = dictHashKey(ht, key) & ht->sizemask;
/* Search if this slot does not already contain the given key */
he = ht->table[h];
while(he) {
if (dictCompareHashKeys(ht, key, he->key))
return -1;
he = he->next;
}
return h;
}
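As an illustration of the static API above, the following sketch (hypothetical, not part of the deleted file) builds a dictionary keyed by C strings: dictGenHashFunction hashes the key, strcmp compares it, and the remaining callbacks are left NULL so keys and values are stored by pointer.

#include <stdio.h>
#include <string.h>

static unsigned int exampleHash(const void *key) {
    return dictGenHashFunction((const unsigned char *)key,
                               (int)strlen((const char *)key));
}

static int exampleKeyCompare(void *privdata, const void *key1, const void *key2) {
    DICT_NOTUSED(privdata);
    return strcmp((const char *)key1, (const char *)key2) == 0;
}

/* Field order: hashFunction, keyDup, valDup, keyCompare, keyDestructor, valDestructor. */
static dictType exampleType = { exampleHash, NULL, NULL, exampleKeyCompare, NULL, NULL };

static void dictExampleUsage(void) {
    dict *d = dictCreate(&exampleType, NULL);
    dictAdd(d, (void *)"greeting", (void *)"hello");      /* key and value kept by pointer */
    dictEntry *he = dictFind(d, "greeting");
    if (he) printf("%s\n", (char *)dictGetEntryVal(he));
    dictRelease(d);
}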

View File

@ -1,126 +0,0 @@
/* Hash table implementation.
*
 * This file implements in-memory hash tables with insert/del/replace/find/
 * get-random-element operations. Hash tables will auto-resize if needed;
 * tables of power of two in size are used, and collisions are handled by
 * chaining. See the source code for more information... :)
*
* Copyright (c) 2006-2010, Salvatore Sanfilippo <antirez at gmail dot com>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __DICT_H
#define __DICT_H
#define DICT_OK 0
#define DICT_ERR 1
/* Unused arguments generate annoying warnings... */
#define DICT_NOTUSED(V) ((void) V)
typedef struct dictEntry {
void *key;
void *val;
struct dictEntry *next;
} dictEntry;
typedef struct dictType {
unsigned int (*hashFunction)(const void *key);
void *(*keyDup)(void *privdata, const void *key);
void *(*valDup)(void *privdata, const void *obj);
int (*keyCompare)(void *privdata, const void *key1, const void *key2);
void (*keyDestructor)(void *privdata, void *key);
void (*valDestructor)(void *privdata, void *obj);
} dictType;
typedef struct dict {
dictEntry **table;
dictType *type;
unsigned long size;
unsigned long sizemask;
unsigned long used;
void *privdata;
} dict;
typedef struct dictIterator {
dict *ht;
int index;
dictEntry *entry, *nextEntry;
} dictIterator;
/* This is the initial size of every hash table */
#define DICT_HT_INITIAL_SIZE 4
/* ------------------------------- Macros ------------------------------------*/
#define dictFreeEntryVal(ht, entry) \
if ((ht)->type->valDestructor) \
(ht)->type->valDestructor((ht)->privdata, (entry)->val)
#define dictSetHashVal(ht, entry, _val_) do { \
if ((ht)->type->valDup) \
entry->val = (ht)->type->valDup((ht)->privdata, _val_); \
else \
entry->val = (_val_); \
} while(0)
#define dictFreeEntryKey(ht, entry) \
if ((ht)->type->keyDestructor) \
(ht)->type->keyDestructor((ht)->privdata, (entry)->key)
#define dictSetHashKey(ht, entry, _key_) do { \
if ((ht)->type->keyDup) \
entry->key = (ht)->type->keyDup((ht)->privdata, _key_); \
else \
entry->key = (_key_); \
} while(0)
#define dictCompareHashKeys(ht, key1, key2) \
(((ht)->type->keyCompare) ? \
(ht)->type->keyCompare((ht)->privdata, key1, key2) : \
(key1) == (key2))
#define dictHashKey(ht, key) (ht)->type->hashFunction(key)
#define dictGetEntryKey(he) ((he)->key)
#define dictGetEntryVal(he) ((he)->val)
#define dictSlots(ht) ((ht)->size)
#define dictSize(ht) ((ht)->used)
/* API */
static unsigned int dictGenHashFunction(const unsigned char *buf, int len);
static dict *dictCreate(dictType *type, void *privDataPtr);
static int dictExpand(dict *ht, unsigned long size);
static int dictAdd(dict *ht, void *key, void *val);
static int dictReplace(dict *ht, void *key, void *val);
static int dictDelete(dict *ht, const void *key);
static void dictRelease(dict *ht);
static dictEntry * dictFind(dict *ht, const void *key);
static dictIterator *dictGetIterator(dict *ht);
static dictEntry *dictNext(dictIterator *iter);
static void dictReleaseIterator(dictIterator *iter);
#endif /* __DICT_H */

View File

@ -1,62 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <hiredis.h>
#include <async.h>
#include <adapters/ae.h>
/* Put event loop in the global scope, so it can be explicitly stopped */
static aeEventLoop *loop;
void getCallback(redisAsyncContext *c, void *r, void *privdata) {
redisReply *reply = r;
if (reply == NULL) return;
printf("argv[%s]: %s\n", (char*)privdata, reply->str);
/* Disconnect after receiving the reply to GET */
redisAsyncDisconnect(c);
}
void connectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
aeStop(loop);
return;
}
printf("Connected...\n");
}
void disconnectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
aeStop(loop);
return;
}
printf("Disconnected...\n");
aeStop(loop);
}
int main (int argc, char **argv) {
signal(SIGPIPE, SIG_IGN);
redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
if (c->err) {
/* Let *c leak for now... */
printf("Error: %s\n", c->errstr);
return 1;
}
loop = aeCreateEventLoop(64);
redisAeAttach(loop, c);
redisAsyncSetConnectCallback(c,connectCallback);
redisAsyncSetDisconnectCallback(c,disconnectCallback);
redisAsyncCommand(c, NULL, NULL, "SET key %b", argv[argc-1], strlen(argv[argc-1]));
redisAsyncCommand(c, getCallback, (char*)"end-1", "GET key");
aeMain(loop);
return 0;
}

View File

@ -1,52 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <hiredis.h>
#include <async.h>
#include <adapters/libev.h>
void getCallback(redisAsyncContext *c, void *r, void *privdata) {
redisReply *reply = r;
if (reply == NULL) return;
printf("argv[%s]: %s\n", (char*)privdata, reply->str);
/* Disconnect after receiving the reply to GET */
redisAsyncDisconnect(c);
}
void connectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
return;
}
printf("Connected...\n");
}
void disconnectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
return;
}
printf("Disconnected...\n");
}
int main (int argc, char **argv) {
signal(SIGPIPE, SIG_IGN);
redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
if (c->err) {
/* Let *c leak for now... */
printf("Error: %s\n", c->errstr);
return 1;
}
redisLibevAttach(EV_DEFAULT_ c);
redisAsyncSetConnectCallback(c,connectCallback);
redisAsyncSetDisconnectCallback(c,disconnectCallback);
redisAsyncCommand(c, NULL, NULL, "SET key %b", argv[argc-1], strlen(argv[argc-1]));
redisAsyncCommand(c, getCallback, (char*)"end-1", "GET key");
ev_loop(EV_DEFAULT_ 0);
return 0;
}

View File

@ -1,53 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <hiredis.h>
#include <async.h>
#include <adapters/libevent.h>
void getCallback(redisAsyncContext *c, void *r, void *privdata) {
redisReply *reply = r;
if (reply == NULL) return;
printf("argv[%s]: %s\n", (char*)privdata, reply->str);
/* Disconnect after receiving the reply to GET */
redisAsyncDisconnect(c);
}
void connectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
return;
}
printf("Connected...\n");
}
void disconnectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
return;
}
printf("Disconnected...\n");
}
int main (int argc, char **argv) {
signal(SIGPIPE, SIG_IGN);
struct event_base *base = event_base_new();
redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
if (c->err) {
/* Let *c leak for now... */
printf("Error: %s\n", c->errstr);
return 1;
}
redisLibeventAttach(c,base);
redisAsyncSetConnectCallback(c,connectCallback);
redisAsyncSetDisconnectCallback(c,disconnectCallback);
redisAsyncCommand(c, NULL, NULL, "SET key %b", argv[argc-1], strlen(argv[argc-1]));
redisAsyncCommand(c, getCallback, (char*)"end-1", "GET key");
event_base_dispatch(base);
return 0;
}

View File

@ -1,53 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <hiredis.h>
#include <async.h>
#include <adapters/libuv.h>
void getCallback(redisAsyncContext *c, void *r, void *privdata) {
redisReply *reply = r;
if (reply == NULL) return;
printf("argv[%s]: %s\n", (char*)privdata, reply->str);
/* Disconnect after receiving the reply to GET */
redisAsyncDisconnect(c);
}
void connectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
return;
}
printf("Connected...\n");
}
void disconnectCallback(const redisAsyncContext *c, int status) {
if (status != REDIS_OK) {
printf("Error: %s\n", c->errstr);
return;
}
printf("Disconnected...\n");
}
int main (int argc, char **argv) {
signal(SIGPIPE, SIG_IGN);
uv_loop_t* loop = uv_default_loop();
redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
if (c->err) {
/* Let *c leak for now... */
printf("Error: %s\n", c->errstr);
return 1;
}
redisLibuvAttach(c,loop);
redisAsyncSetConnectCallback(c,connectCallback);
redisAsyncSetDisconnectCallback(c,disconnectCallback);
redisAsyncCommand(c, NULL, NULL, "SET key %b", argv[argc-1], strlen(argv[argc-1]));
redisAsyncCommand(c, getCallback, (char*)"end-1", "GET key");
uv_run(loop, UV_RUN_DEFAULT);
return 0;
}

View File

@ -1,78 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <hiredis.h>
int main(int argc, char **argv) {
unsigned int j;
redisContext *c;
redisReply *reply;
const char *hostname = (argc > 1) ? argv[1] : "127.0.0.1";
int port = (argc > 2) ? atoi(argv[2]) : 6379;
struct timeval timeout = { 1, 500000 }; // 1.5 seconds
c = redisConnectWithTimeout(hostname, port, timeout);
if (c == NULL || c->err) {
if (c) {
printf("Connection error: %s\n", c->errstr);
redisFree(c);
} else {
printf("Connection error: can't allocate redis context\n");
}
exit(1);
}
/* PING server */
reply = redisCommand(c,"PING");
printf("PING: %s\n", reply->str);
freeReplyObject(reply);
/* Set a key */
reply = redisCommand(c,"SET %s %s", "foo", "hello world");
printf("SET: %s\n", reply->str);
freeReplyObject(reply);
/* Set a key using binary safe API */
reply = redisCommand(c,"SET %b %b", "bar", (size_t) 3, "hello", (size_t) 5);
printf("SET (binary API): %s\n", reply->str);
freeReplyObject(reply);
/* Try a GET and two INCR */
reply = redisCommand(c,"GET foo");
printf("GET foo: %s\n", reply->str);
freeReplyObject(reply);
reply = redisCommand(c,"INCR counter");
printf("INCR counter: %lld\n", reply->integer);
freeReplyObject(reply);
/* again ... */
reply = redisCommand(c,"INCR counter");
printf("INCR counter: %lld\n", reply->integer);
freeReplyObject(reply);
/* Create a list of numbers, from 0 to 9 */
reply = redisCommand(c,"DEL mylist");
freeReplyObject(reply);
for (j = 0; j < 10; j++) {
char buf[64];
snprintf(buf,64,"%d",j);
reply = redisCommand(c,"LPUSH mylist element-%s", buf);
freeReplyObject(reply);
}
/* Let's check what we have inside the list */
reply = redisCommand(c,"LRANGE mylist 0 -1");
if (reply->type == REDIS_REPLY_ARRAY) {
for (j = 0; j < reply->elements; j++) {
printf("%u) %s\n", j, reply->element[j]->str);
}
}
freeReplyObject(reply);
/* Disconnects and frees the context */
redisFree(c);
return 0;
}

View File

@ -1,24 +0,0 @@
#ifndef __HIREDIS_FMACRO_H
#define __HIREDIS_FMACRO_H
#if !defined(_BSD_SOURCE)
#define _BSD_SOURCE
#endif
#if defined(_AIX)
#define _ALL_SOURCE
#endif
#if defined(__sun__)
#define _POSIX_C_SOURCE 200112L
#elif defined(__linux__) || defined(__OpenBSD__) || defined(__NetBSD__)
#define _XOPEN_SOURCE 600
#else
#define _XOPEN_SOURCE
#endif
#if __APPLE__ && __MACH__
#define _OSX
#endif
#endif

File diff suppressed because it is too large Load Diff

View File

@ -1,220 +0,0 @@
/*
* Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __HIREDIS_H
#define __HIREDIS_H
#include <stdio.h> /* for size_t */
#include <stdarg.h> /* for va_list */
#include <sys/time.h> /* for struct timeval */
#define HIREDIS_MAJOR 0
#define HIREDIS_MINOR 11
#define HIREDIS_PATCH 0
#define REDIS_ERR -1
#define REDIS_OK 0
/* When an error occurs, the err flag in a context is set to hold the type of
 * error that occurred. REDIS_ERR_IO means there was an I/O error and you
* should use the "errno" variable to find out what is wrong.
* For other values, the "errstr" field will hold a description. */
#define REDIS_ERR_IO 1 /* Error in read or write */
#define REDIS_ERR_EOF 3 /* End of file */
#define REDIS_ERR_PROTOCOL 4 /* Protocol error */
#define REDIS_ERR_OOM 5 /* Out of memory */
#define REDIS_ERR_OTHER 2 /* Everything else... */
/* Connection type can be blocking or non-blocking and is set in the
* least significant bit of the flags field in redisContext. */
#define REDIS_BLOCK 0x1
/* Connection may be disconnected before being free'd. The second bit
* in the flags field is set when the context is connected. */
#define REDIS_CONNECTED 0x2
/* The async API might try to disconnect cleanly and flush the output
* buffer and read all subsequent replies before disconnecting.
* This flag means no new commands can come in and the connection
* should be terminated once all replies have been read. */
#define REDIS_DISCONNECTING 0x4
/* Flag specific to the async API which means that the context should be
 * cleaned up as soon as possible. */
#define REDIS_FREEING 0x8
/* Flag that is set when an async callback is executed. */
#define REDIS_IN_CALLBACK 0x10
/* Flag that is set when the async context has one or more subscriptions. */
#define REDIS_SUBSCRIBED 0x20
/* Flag that is set when monitor mode is active */
#define REDIS_MONITORING 0x40
#define REDIS_REPLY_STRING 1
#define REDIS_REPLY_ARRAY 2
#define REDIS_REPLY_INTEGER 3
#define REDIS_REPLY_NIL 4
#define REDIS_REPLY_STATUS 5
#define REDIS_REPLY_ERROR 6
#define REDIS_READER_MAX_BUF (1024*16) /* Default max unused reader buffer. */
#define REDIS_KEEPALIVE_INTERVAL 15 /* seconds */
#ifdef __cplusplus
extern "C" {
#endif
/* This is the reply object returned by redisCommand() */
typedef struct redisReply {
int type; /* REDIS_REPLY_* */
long long integer; /* The integer when type is REDIS_REPLY_INTEGER */
int len; /* Length of string */
char *str; /* Used for both REDIS_REPLY_ERROR and REDIS_REPLY_STRING */
size_t elements; /* number of elements, for REDIS_REPLY_ARRAY */
struct redisReply **element; /* elements vector for REDIS_REPLY_ARRAY */
} redisReply;
typedef struct redisReadTask {
int type;
int elements; /* number of elements in multibulk container */
int idx; /* index in parent (array) object */
void *obj; /* holds user-generated value for a read task */
struct redisReadTask *parent; /* parent task */
void *privdata; /* user-settable arbitrary field */
} redisReadTask;
typedef struct redisReplyObjectFunctions {
void *(*createString)(const redisReadTask*, char*, size_t);
void *(*createArray)(const redisReadTask*, int);
void *(*createInteger)(const redisReadTask*, long long);
void *(*createNil)(const redisReadTask*);
void (*freeObject)(void*);
} redisReplyObjectFunctions;
/* State for the protocol parser */
typedef struct redisReader {
int err; /* Error flags, 0 when there is no error */
char errstr[128]; /* String representation of error when applicable */
char *buf; /* Read buffer */
size_t pos; /* Buffer cursor */
size_t len; /* Buffer length */
size_t maxbuf; /* Max length of unused buffer */
redisReadTask rstack[9];
int ridx; /* Index of current read task */
void *reply; /* Temporary reply pointer */
redisReplyObjectFunctions *fn;
void *privdata;
} redisReader;
/* Public API for the protocol parser. */
redisReader *redisReaderCreate(void);
void redisReaderFree(redisReader *r);
int redisReaderFeed(redisReader *r, const char *buf, size_t len);
int redisReaderGetReply(redisReader *r, void **reply);
/* Backwards compatibility, can be removed on big version bump. */
#define redisReplyReaderCreate redisReaderCreate
#define redisReplyReaderFree redisReaderFree
#define redisReplyReaderFeed redisReaderFeed
#define redisReplyReaderGetReply redisReaderGetReply
#define redisReplyReaderSetPrivdata(_r, _p) (int)(((redisReader*)(_r))->privdata = (_p))
#define redisReplyReaderGetObject(_r) (((redisReader*)(_r))->reply)
#define redisReplyReaderGetError(_r) (((redisReader*)(_r))->errstr)
/* Function to free the reply objects hiredis returns by default. */
void freeReplyObject(void *reply);
/* Functions to format a command according to the protocol. */
int redisvFormatCommand(char **target, const char *format, va_list ap);
int redisFormatCommand(char **target, const char *format, ...);
int redisFormatCommandArgv(char **target, int argc, const char **argv, const size_t *argvlen);
/* Context for a connection to Redis */
typedef struct redisContext {
int err; /* Error flags, 0 when there is no error */
char errstr[128]; /* String representation of error when applicable */
int fd;
int flags;
char *obuf; /* Write buffer */
redisReader *reader; /* Protocol reader */
} redisContext;
redisContext *redisConnect(const char *ip, int port);
redisContext *redisConnectWithTimeout(const char *ip, int port, const struct timeval tv);
redisContext *redisConnectNonBlock(const char *ip, int port);
redisContext *redisConnectBindNonBlock(const char *ip, int port, const char *source_addr);
redisContext *redisConnectUnix(const char *path);
redisContext *redisConnectUnixWithTimeout(const char *path, const struct timeval tv);
redisContext *redisConnectUnixNonBlock(const char *path);
redisContext *redisConnectFd(int fd);
int redisSetTimeout(redisContext *c, const struct timeval tv);
int redisEnableKeepAlive(redisContext *c);
void redisFree(redisContext *c);
int redisFreeKeepFd(redisContext *c);
int redisBufferRead(redisContext *c);
int redisBufferWrite(redisContext *c, int *done);
/* In a blocking context, this function first checks if there are unconsumed
* replies to return and returns one if so. Otherwise, it flushes the output
* buffer to the socket and reads until it has a reply. In a non-blocking
* context, it will return unconsumed replies until there are no more. */
int redisGetReply(redisContext *c, void **reply);
int redisGetReplyFromReader(redisContext *c, void **reply);
/* Write a formatted command to the output buffer. Use these functions in blocking mode
* to get a pipeline of commands. */
int redisAppendFormattedCommand(redisContext *c, const char *cmd, size_t len);
/* Write a command to the output buffer. Use these functions in blocking mode
* to get a pipeline of commands. */
int redisvAppendCommand(redisContext *c, const char *format, va_list ap);
int redisAppendCommand(redisContext *c, const char *format, ...);
int redisAppendCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);
/* Issue a command to Redis. In a blocking context, it is identical to calling
* redisAppendCommand, followed by redisGetReply. The function will return
* NULL if there was an error in performing the request, otherwise it will
* return the reply. In a non-blocking context, it is identical to calling
* only redisAppendCommand and will always return NULL. */
void *redisvCommand(redisContext *c, const char *format, va_list ap);
void *redisCommand(redisContext *c, const char *format, ...);
void *redisCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);
#ifdef __cplusplus
}
#endif
#endif
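To make the buffered/pipelined usage described in the comments above concrete, here is a minimal blocking-mode sketch (illustration only; the 127.0.0.1:6379 endpoint is an assumption, not taken from the original tree). Two commands are queued with redisAppendCommand and their replies are read back in order; the first redisGetReply call flushes the output buffer.

#include <stdio.h>
#include "hiredis.h"

static void pipelineExample(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {                       /* connection failed */
        if (c) redisFree(c);
        return;
    }

    redisAppendCommand(c, "SET %s %s", "key", "value");
    redisAppendCommand(c, "GET %s", "key");

    void *reply;
    if (redisGetReply(c, &reply) == REDIS_OK)        /* status reply from SET */
        freeReplyObject(reply);
    if (redisGetReply(c, &reply) == REDIS_OK) {      /* bulk reply from GET */
        redisReply *r = reply;
        if (r->type == REDIS_REPLY_STRING)
            printf("GET key -> %s\n", r->str);
        freeReplyObject(r);
    }
    redisFree(c);
}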

View File

@ -1,382 +0,0 @@
/* Extracted from anet.c to work properly with Hiredis error reporting.
*
* Copyright (c) 2006-2011, Salvatore Sanfilippo <antirez at gmail dot com>
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include "fmacros.h"
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/un.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <netdb.h>
#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <poll.h>
#include <limits.h>
#include "net.h"
#include "sds.h"
/* Defined in hiredis.c */
void __redisSetError(redisContext *c, int type, const char *str);
static void redisContextCloseFd(redisContext *c) {
if (c && c->fd >= 0) {
close(c->fd);
c->fd = -1;
}
}
static void __redisSetErrorFromErrno(redisContext *c, int type, const char *prefix) {
char buf[128] = { 0 };
size_t len = 0;
if (prefix != NULL)
len = snprintf(buf,sizeof(buf),"%s: ",prefix);
strerror_r(errno,buf+len,sizeof(buf)-len);
__redisSetError(c,type,buf);
}
static int redisSetReuseAddr(redisContext *c) {
int on = 1;
if (setsockopt(c->fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);
redisContextCloseFd(c);
return REDIS_ERR;
}
return REDIS_OK;
}
static int redisCreateSocket(redisContext *c, int type) {
int s;
if ((s = socket(type, SOCK_STREAM, 0)) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);
return REDIS_ERR;
}
c->fd = s;
if (type == AF_INET) {
if (redisSetReuseAddr(c) == REDIS_ERR) {
return REDIS_ERR;
}
}
return REDIS_OK;
}
static int redisSetBlocking(redisContext *c, int blocking) {
int flags;
/* Set the socket nonblocking.
* Note that fcntl(2) for F_GETFL and F_SETFL can't be
* interrupted by a signal. */
if ((flags = fcntl(c->fd, F_GETFL)) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,"fcntl(F_GETFL)");
redisContextCloseFd(c);
return REDIS_ERR;
}
if (blocking)
flags &= ~O_NONBLOCK;
else
flags |= O_NONBLOCK;
if (fcntl(c->fd, F_SETFL, flags) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,"fcntl(F_SETFL)");
redisContextCloseFd(c);
return REDIS_ERR;
}
return REDIS_OK;
}
int redisKeepAlive(redisContext *c, int interval) {
int val = 1;
int fd = c->fd;
if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &val, sizeof(val)) == -1){
__redisSetError(c,REDIS_ERR_OTHER,strerror(errno));
return REDIS_ERR;
}
val = interval;
#ifdef _OSX
if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE, &val, sizeof(val)) < 0) {
__redisSetError(c,REDIS_ERR_OTHER,strerror(errno));
return REDIS_ERR;
}
#else
#ifndef __sun
val = interval;
if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &val, sizeof(val)) < 0) {
__redisSetError(c,REDIS_ERR_OTHER,strerror(errno));
return REDIS_ERR;
}
val = interval/3;
if (val == 0) val = 1;
if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &val, sizeof(val)) < 0) {
__redisSetError(c,REDIS_ERR_OTHER,strerror(errno));
return REDIS_ERR;
}
val = 3;
if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &val, sizeof(val)) < 0) {
__redisSetError(c,REDIS_ERR_OTHER,strerror(errno));
return REDIS_ERR;
}
#endif
#endif
return REDIS_OK;
}
static int redisSetTcpNoDelay(redisContext *c) {
int yes = 1;
if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY, &yes, sizeof(yes)) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,"setsockopt(TCP_NODELAY)");
redisContextCloseFd(c);
return REDIS_ERR;
}
return REDIS_OK;
}
#define __MAX_MSEC (((LONG_MAX) - 999) / 1000)
static int redisContextWaitReady(redisContext *c, const struct timeval *timeout) {
struct pollfd wfd[1];
long msec;
msec = -1;
wfd[0].fd = c->fd;
wfd[0].events = POLLOUT;
/* Only use timeout when not NULL. */
if (timeout != NULL) {
if (timeout->tv_usec > 1000000 || timeout->tv_sec > __MAX_MSEC) {
__redisSetErrorFromErrno(c, REDIS_ERR_IO, NULL);
redisContextCloseFd(c);
return REDIS_ERR;
}
msec = (timeout->tv_sec * 1000) + ((timeout->tv_usec + 999) / 1000);
if (msec < 0 || msec > INT_MAX) {
msec = INT_MAX;
}
}
if (errno == EINPROGRESS) {
int res;
if ((res = poll(wfd, 1, msec)) == -1) {
__redisSetErrorFromErrno(c, REDIS_ERR_IO, "poll(2)");
redisContextCloseFd(c);
return REDIS_ERR;
} else if (res == 0) {
errno = ETIMEDOUT;
__redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);
redisContextCloseFd(c);
return REDIS_ERR;
}
if (redisCheckSocketError(c) != REDIS_OK)
return REDIS_ERR;
return REDIS_OK;
}
__redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);
redisContextCloseFd(c);
return REDIS_ERR;
}
int redisCheckSocketError(redisContext *c) {
int err = 0;
socklen_t errlen = sizeof(err);
if (getsockopt(c->fd, SOL_SOCKET, SO_ERROR, &err, &errlen) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,"getsockopt(SO_ERROR)");
return REDIS_ERR;
}
if (err) {
errno = err;
__redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);
return REDIS_ERR;
}
return REDIS_OK;
}
int redisContextSetTimeout(redisContext *c, const struct timeval tv) {
if (setsockopt(c->fd,SOL_SOCKET,SO_RCVTIMEO,&tv,sizeof(tv)) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,"setsockopt(SO_RCVTIMEO)");
return REDIS_ERR;
}
if (setsockopt(c->fd,SOL_SOCKET,SO_SNDTIMEO,&tv,sizeof(tv)) == -1) {
__redisSetErrorFromErrno(c,REDIS_ERR_IO,"setsockopt(SO_SNDTIMEO)");
return REDIS_ERR;
}
return REDIS_OK;
}
static int _redisContextConnectTcp(redisContext *c, const char *addr, int port,
const struct timeval *timeout,
const char *source_addr) {
int s, rv;
char _port[6]; /* strlen("65535"); */
struct addrinfo hints, *servinfo, *bservinfo, *p, *b;
int blocking = (c->flags & REDIS_BLOCK);
snprintf(_port, 6, "%d", port);
memset(&hints,0,sizeof(hints));
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
/* Try with IPv6 if no IPv4 address was found. We do it in this order since
* in a Redis client you can't afford to test if you have IPv6 connectivity
* as this would add latency to every connect. Otherwise a more sensible
* route could be: Use IPv6 if both addresses are available and there is IPv6
* connectivity. */
if ((rv = getaddrinfo(addr,_port,&hints,&servinfo)) != 0) {
hints.ai_family = AF_INET6;
if ((rv = getaddrinfo(addr,_port,&hints,&servinfo)) != 0) {
__redisSetError(c,REDIS_ERR_OTHER,gai_strerror(rv));
return REDIS_ERR;
}
}
for (p = servinfo; p != NULL; p = p->ai_next) {
if ((s = socket(p->ai_family,p->ai_socktype,p->ai_protocol)) == -1)
continue;
c->fd = s;
if (redisSetBlocking(c,0) != REDIS_OK)
goto error;
if (source_addr) {
int bound = 0;
/* Using getaddrinfo saves us from self-determining IPv4 vs IPv6 */
if ((rv = getaddrinfo(source_addr, NULL, &hints, &bservinfo)) != 0) {
char buf[128];
snprintf(buf,sizeof(buf),"Can't get addr: %s",gai_strerror(rv));
__redisSetError(c,REDIS_ERR_OTHER,buf);
goto error;
}
for (b = bservinfo; b != NULL; b = b->ai_next) {
if (bind(s,b->ai_addr,b->ai_addrlen) != -1) {
bound = 1;
break;
}
}
freeaddrinfo(bservinfo);
if (!bound) {
char buf[128];
snprintf(buf,sizeof(buf),"Can't bind socket: %s",strerror(errno));
__redisSetError(c,REDIS_ERR_OTHER,buf);
goto error;
}
}
if (connect(s,p->ai_addr,p->ai_addrlen) == -1) {
if (errno == EHOSTUNREACH) {
redisContextCloseFd(c);
continue;
} else if (errno == EINPROGRESS && !blocking) {
/* This is ok. */
} else {
if (redisContextWaitReady(c,timeout) != REDIS_OK)
goto error;
}
}
if (blocking && redisSetBlocking(c,1) != REDIS_OK)
goto error;
if (redisSetTcpNoDelay(c) != REDIS_OK)
goto error;
c->flags |= REDIS_CONNECTED;
rv = REDIS_OK;
goto end;
}
if (p == NULL) {
char buf[128];
snprintf(buf,sizeof(buf),"Can't create socket: %s",strerror(errno));
__redisSetError(c,REDIS_ERR_OTHER,buf);
goto error;
}
error:
rv = REDIS_ERR;
end:
freeaddrinfo(servinfo);
return rv; // Need to return REDIS_OK if alright
}
int redisContextConnectTcp(redisContext *c, const char *addr, int port,
const struct timeval *timeout) {
return _redisContextConnectTcp(c, addr, port, timeout, NULL);
}
int redisContextConnectBindTcp(redisContext *c, const char *addr, int port,
const struct timeval *timeout,
const char *source_addr) {
return _redisContextConnectTcp(c, addr, port, timeout, source_addr);
}
int redisContextConnectUnix(redisContext *c, const char *path, const struct timeval *timeout) {
int blocking = (c->flags & REDIS_BLOCK);
struct sockaddr_un sa;
if (redisCreateSocket(c,AF_LOCAL) < 0)
return REDIS_ERR;
if (redisSetBlocking(c,0) != REDIS_OK)
return REDIS_ERR;
sa.sun_family = AF_LOCAL;
strncpy(sa.sun_path,path,sizeof(sa.sun_path)-1);
if (connect(c->fd, (struct sockaddr*)&sa, sizeof(sa)) == -1) {
if (errno == EINPROGRESS && !blocking) {
/* This is ok. */
} else {
if (redisContextWaitReady(c,timeout) != REDIS_OK)
return REDIS_ERR;
}
}
/* Reset socket to be blocking after connect(2). */
if (blocking && redisSetBlocking(c,1) != REDIS_OK)
return REDIS_ERR;
c->flags |= REDIS_CONNECTED;
return REDIS_OK;
}

View File

@ -1,51 +0,0 @@
/* Extracted from anet.c to work properly with Hiredis error reporting.
*
* Copyright (c) 2006-2011, Salvatore Sanfilippo <antirez at gmail dot com>
* Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __NET_H
#define __NET_H
#include "hiredis.h"
#if defined(__sun) || defined(_AIX)
#define AF_LOCAL AF_UNIX
#endif
int redisCheckSocketError(redisContext *c);
int redisContextSetTimeout(redisContext *c, const struct timeval tv);
int redisContextConnectTcp(redisContext *c, const char *addr, int port, const struct timeval *timeout);
int redisContextConnectBindTcp(redisContext *c, const char *addr, int port,
const struct timeval *timeout,
const char *source_addr);
int redisContextConnectUnix(redisContext *c, const char *path, const struct timeval *timeout);
int redisKeepAlive(redisContext *c, int interval);
#endif

File diff suppressed because it is too large Load Diff

View File

@ -1,273 +0,0 @@
/* SDSLib 2.0 -- A C dynamic strings library
*
* Copyright (c) 2006-2015, Salvatore Sanfilippo <antirez at gmail dot com>
* Copyright (c) 2015, Oran Agra
* Copyright (c) 2015, Redis Labs, Inc
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __SDS_H
#define __SDS_H
#define SDS_MAX_PREALLOC (1024*1024)
#include <sys/types.h>
#include <stdarg.h>
#include <stdint.h>
typedef char *sds;
/* Note: sdshdr5 is never used, we just access the flags byte directly.
 * However, it is here to document the layout of type 5 SDS strings. */
struct __attribute__ ((__packed__)) sdshdr5 {
unsigned char flags; /* 3 lsb of type, and 5 msb of string length */
char buf[];
};
struct __attribute__ ((__packed__)) sdshdr8 {
uint8_t len; /* used */
uint8_t alloc; /* excluding the header and null terminator */
unsigned char flags; /* 3 lsb of type, 5 unused bits */
char buf[];
};
struct __attribute__ ((__packed__)) sdshdr16 {
uint16_t len; /* used */
uint16_t alloc; /* excluding the header and null terminator */
unsigned char flags; /* 3 lsb of type, 5 unused bits */
char buf[];
};
struct __attribute__ ((__packed__)) sdshdr32 {
uint32_t len; /* used */
uint32_t alloc; /* excluding the header and null terminator */
unsigned char flags; /* 3 lsb of type, 5 unused bits */
char buf[];
};
struct __attribute__ ((__packed__)) sdshdr64 {
uint64_t len; /* used */
uint64_t alloc; /* excluding the header and null terminator */
unsigned char flags; /* 3 lsb of type, 5 unused bits */
char buf[];
};
#define SDS_TYPE_5 0
#define SDS_TYPE_8 1
#define SDS_TYPE_16 2
#define SDS_TYPE_32 3
#define SDS_TYPE_64 4
#define SDS_TYPE_MASK 7
#define SDS_TYPE_BITS 3
#define SDS_HDR_VAR(T,s) struct sdshdr##T *sh = (void*)((s)-(sizeof(struct sdshdr##T)));
#define SDS_HDR(T,s) ((struct sdshdr##T *)((s)-(sizeof(struct sdshdr##T))))
#define SDS_TYPE_5_LEN(f) ((f)>>SDS_TYPE_BITS)
static inline size_t sdslen(const sds s) {
unsigned char flags = s[-1];
switch(flags&SDS_TYPE_MASK) {
case SDS_TYPE_5:
return SDS_TYPE_5_LEN(flags);
case SDS_TYPE_8:
return SDS_HDR(8,s)->len;
case SDS_TYPE_16:
return SDS_HDR(16,s)->len;
case SDS_TYPE_32:
return SDS_HDR(32,s)->len;
case SDS_TYPE_64:
return SDS_HDR(64,s)->len;
}
return 0;
}
static inline size_t sdsavail(const sds s) {
unsigned char flags = s[-1];
switch(flags&SDS_TYPE_MASK) {
case SDS_TYPE_5: {
return 0;
}
case SDS_TYPE_8: {
SDS_HDR_VAR(8,s);
return sh->alloc - sh->len;
}
case SDS_TYPE_16: {
SDS_HDR_VAR(16,s);
return sh->alloc - sh->len;
}
case SDS_TYPE_32: {
SDS_HDR_VAR(32,s);
return sh->alloc - sh->len;
}
case SDS_TYPE_64: {
SDS_HDR_VAR(64,s);
return sh->alloc - sh->len;
}
}
return 0;
}
static inline void sdssetlen(sds s, size_t newlen) {
unsigned char flags = s[-1];
switch(flags&SDS_TYPE_MASK) {
case SDS_TYPE_5:
{
unsigned char *fp = ((unsigned char*)s)-1;
*fp = SDS_TYPE_5 | (newlen << SDS_TYPE_BITS);
}
break;
case SDS_TYPE_8:
SDS_HDR(8,s)->len = newlen;
break;
case SDS_TYPE_16:
SDS_HDR(16,s)->len = newlen;
break;
case SDS_TYPE_32:
SDS_HDR(32,s)->len = newlen;
break;
case SDS_TYPE_64:
SDS_HDR(64,s)->len = newlen;
break;
}
}
static inline void sdsinclen(sds s, size_t inc) {
unsigned char flags = s[-1];
switch(flags&SDS_TYPE_MASK) {
case SDS_TYPE_5:
{
unsigned char *fp = ((unsigned char*)s)-1;
unsigned char newlen = SDS_TYPE_5_LEN(flags)+inc;
*fp = SDS_TYPE_5 | (newlen << SDS_TYPE_BITS);
}
break;
case SDS_TYPE_8:
SDS_HDR(8,s)->len += inc;
break;
case SDS_TYPE_16:
SDS_HDR(16,s)->len += inc;
break;
case SDS_TYPE_32:
SDS_HDR(32,s)->len += inc;
break;
case SDS_TYPE_64:
SDS_HDR(64,s)->len += inc;
break;
}
}
/* sdsalloc() = sdsavail() + sdslen() */
static inline size_t sdsalloc(const sds s) {
unsigned char flags = s[-1];
switch(flags&SDS_TYPE_MASK) {
case SDS_TYPE_5:
return SDS_TYPE_5_LEN(flags);
case SDS_TYPE_8:
return SDS_HDR(8,s)->alloc;
case SDS_TYPE_16:
return SDS_HDR(16,s)->alloc;
case SDS_TYPE_32:
return SDS_HDR(32,s)->alloc;
case SDS_TYPE_64:
return SDS_HDR(64,s)->alloc;
}
return 0;
}
static inline void sdssetalloc(sds s, size_t newlen) {
unsigned char flags = s[-1];
switch(flags&SDS_TYPE_MASK) {
case SDS_TYPE_5:
/* Nothing to do, this type has no total allocation info. */
break;
case SDS_TYPE_8:
SDS_HDR(8,s)->alloc = newlen;
break;
case SDS_TYPE_16:
SDS_HDR(16,s)->alloc = newlen;
break;
case SDS_TYPE_32:
SDS_HDR(32,s)->alloc = newlen;
break;
case SDS_TYPE_64:
SDS_HDR(64,s)->alloc = newlen;
break;
}
}
sds sdsnewlen(const void *init, size_t initlen);
sds sdsnew(const char *init);
sds sdsempty(void);
sds sdsdup(const sds s);
void sdsfree(sds s);
sds sdsgrowzero(sds s, size_t len);
sds sdscatlen(sds s, const void *t, size_t len);
sds sdscat(sds s, const char *t);
sds sdscatsds(sds s, const sds t);
sds sdscpylen(sds s, const char *t, size_t len);
sds sdscpy(sds s, const char *t);
sds sdscatvprintf(sds s, const char *fmt, va_list ap);
#ifdef __GNUC__
sds sdscatprintf(sds s, const char *fmt, ...)
__attribute__((format(printf, 2, 3)));
#else
sds sdscatprintf(sds s, const char *fmt, ...);
#endif
sds sdscatfmt(sds s, char const *fmt, ...);
sds sdstrim(sds s, const char *cset);
void sdsrange(sds s, int start, int end);
void sdsupdatelen(sds s);
void sdsclear(sds s);
int sdscmp(const sds s1, const sds s2);
sds *sdssplitlen(const char *s, int len, const char *sep, int seplen, int *count);
void sdsfreesplitres(sds *tokens, int count);
void sdstolower(sds s);
void sdstoupper(sds s);
sds sdsfromlonglong(long long value);
sds sdscatrepr(sds s, const char *p, size_t len);
sds *sdssplitargs(const char *line, int *argc);
sds sdsmapchars(sds s, const char *from, const char *to, size_t setlen);
sds sdsjoin(char **argv, int argc, char *sep);
sds sdsjoinsds(sds *argv, int argc, const char *sep, size_t seplen);
/* Low level functions exposed to the user API */
sds sdsMakeRoomFor(sds s, size_t addlen);
void sdsIncrLen(sds s, int incr);
sds sdsRemoveFreeSpace(sds s);
size_t sdsAllocSize(sds s);
void *sdsAllocPtr(sds s);
/* Export the allocator used by SDS to the program using SDS.
 * The program that SDS is linked to may use a different set of
 * allocators, but may still want to allocate or free things that SDS will
 * respectively free or allocate. */
void *sds_malloc(size_t size);
void *sds_realloc(void *ptr, size_t size);
void sds_free(void *ptr);
#ifdef REDIS_TEST
int sdsTest(int argc, char *argv[]);
#endif
#endif
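A tiny usage sketch (hypothetical, not from the original tree) showing the invariant the inline helpers above maintain: sdsalloc(s) == sdslen(s) + sdsavail(s), regardless of which sdshdr type backs the string.

#include <stdio.h>
#include "sds.h"

static void sdsExample(void) {
    sds s = sdsnew("hello");
    s = sdscat(s, " world");                         /* may reallocate and move s */
    printf("len=%zu avail=%zu alloc=%zu\n",
           sdslen(s), sdsavail(s), sdsalloc(s));     /* alloc == len + avail */
    sdsfree(s);
}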

View File

@ -1,42 +0,0 @@
/* SDSLib 2.0 -- A C dynamic strings library
*
* Copyright (c) 2006-2015, Salvatore Sanfilippo <antirez at gmail dot com>
* Copyright (c) 2015, Redis Labs, Inc
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Redis nor the names of its contributors may be used
* to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
/* SDS allocator selection.
*
 * This file is used to change the SDS allocator at compile time.
 * Just point the following defines at the allocator you want to use. Also add
* the include of your alternate allocator if needed (not needed in order
* to use the default libc allocator). */
#include "zmalloc.h"
#define s_malloc zmalloc
#define s_realloc zrealloc
#define s_free zfree
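For example, a hypothetical build that prefers the plain libc allocator over zmalloc would only repoint the same three defines (sketch for illustration; not part of the original file):

/* Hypothetical libc-backed selection. */
#include <stdlib.h>
#define s_malloc  malloc
#define s_realloc realloc
#define s_free    free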

View File

@ -1,733 +0,0 @@
#include "fmacros.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <sys/time.h>
#include <assert.h>
#include <unistd.h>
#include <signal.h>
#include <errno.h>
#include <limits.h>
#include "hiredis.h"
enum connection_type {
CONN_TCP,
CONN_UNIX,
CONN_FD
};
struct config {
enum connection_type type;
struct {
const char *host;
int port;
struct timeval timeout;
} tcp;
struct {
const char *path;
} unix;
};
/* The following lines make up our testing "framework" :) */
static int tests = 0, fails = 0;
#define test(_s) { printf("#%02d ", ++tests); printf(_s); }
#define test_cond(_c) if(_c) printf("\033[0;32mPASSED\033[0;0m\n"); else {printf("\033[0;31mFAILED\033[0;0m\n"); fails++;}
static long long usec(void) {
struct timeval tv;
gettimeofday(&tv,NULL);
return (((long long)tv.tv_sec)*1000000)+tv.tv_usec;
}
static redisContext *select_database(redisContext *c) {
redisReply *reply;
/* Switch to DB 9 for testing, now that we know we can chat. */
reply = redisCommand(c,"SELECT 9");
assert(reply != NULL);
freeReplyObject(reply);
/* Make sure the DB is empty */
reply = redisCommand(c,"DBSIZE");
assert(reply != NULL);
if (reply->type == REDIS_REPLY_INTEGER && reply->integer == 0) {
/* Awesome, DB 9 is empty and we can continue. */
freeReplyObject(reply);
} else {
printf("Database #9 is not empty, test can not continue\n");
exit(1);
}
return c;
}
static int disconnect(redisContext *c, int keep_fd) {
redisReply *reply;
/* Make sure we're on DB 9. */
reply = redisCommand(c,"SELECT 9");
assert(reply != NULL);
freeReplyObject(reply);
reply = redisCommand(c,"FLUSHDB");
assert(reply != NULL);
freeReplyObject(reply);
/* Free the context as well, but keep the fd if requested. */
if (keep_fd)
return redisFreeKeepFd(c);
redisFree(c);
return -1;
}
static redisContext *connect(struct config config) {
redisContext *c = NULL;
if (config.type == CONN_TCP) {
c = redisConnect(config.tcp.host, config.tcp.port);
} else if (config.type == CONN_UNIX) {
c = redisConnectUnix(config.unix.path);
} else if (config.type == CONN_FD) {
/* Create a dummy connection just to get an fd to inherit */
redisContext *dummy_ctx = redisConnectUnix(config.unix.path);
if (dummy_ctx) {
int fd = disconnect(dummy_ctx, 1);
printf("Connecting to inherited fd %d\n", fd);
c = redisConnectFd(fd);
}
} else {
assert(NULL);
}
if (c == NULL) {
printf("Connection error: can't allocate redis context\n");
exit(1);
} else if (c->err) {
printf("Connection error: %s\n", c->errstr);
exit(1);
}
return select_database(c);
}
static void test_format_commands(void) {
char *cmd;
int len;
test("Format command without interpolation: ");
len = redisFormatCommand(&cmd,"SET foo bar");
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(3+2)+4+(3+2));
free(cmd);
test("Format command with %%s string interpolation: ");
len = redisFormatCommand(&cmd,"SET %s %s","foo","bar");
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(3+2)+4+(3+2));
free(cmd);
test("Format command with %%s and an empty string: ");
len = redisFormatCommand(&cmd,"SET %s %s","foo","");
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$0\r\n\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(3+2)+4+(0+2));
free(cmd);
test("Format command with an empty string in between proper interpolations: ");
len = redisFormatCommand(&cmd,"SET %s %s","","foo");
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$0\r\n\r\n$3\r\nfoo\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(0+2)+4+(3+2));
free(cmd);
test("Format command with %%b string interpolation: ");
len = redisFormatCommand(&cmd,"SET %b %b","foo",(size_t)3,"b\0r",(size_t)3);
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nb\0r\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(3+2)+4+(3+2));
free(cmd);
test("Format command with %%b and an empty string: ");
len = redisFormatCommand(&cmd,"SET %b %b","foo",(size_t)3,"",(size_t)0);
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$0\r\n\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(3+2)+4+(0+2));
free(cmd);
test("Format command with literal %%: ");
len = redisFormatCommand(&cmd,"SET %% %%");
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$1\r\n%\r\n$1\r\n%\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(1+2)+4+(1+2));
free(cmd);
/* Vararg width depends on the type. These tests make sure that the
* width is correctly determined using the format and subsequent varargs
* can correctly be interpolated. */
#define INTEGER_WIDTH_TEST(fmt, type) do { \
type value = 123; \
test("Format command with printf-delegation (" #type "): "); \
len = redisFormatCommand(&cmd,"key:%08" fmt " str:%s", value, "hello"); \
test_cond(strncmp(cmd,"*2\r\n$12\r\nkey:00000123\r\n$9\r\nstr:hello\r\n",len) == 0 && \
len == 4+5+(12+2)+4+(9+2)); \
free(cmd); \
} while(0)
#define FLOAT_WIDTH_TEST(type) do { \
type value = 123.0; \
test("Format command with printf-delegation (" #type "): "); \
len = redisFormatCommand(&cmd,"key:%08.3f str:%s", value, "hello"); \
test_cond(strncmp(cmd,"*2\r\n$12\r\nkey:0123.000\r\n$9\r\nstr:hello\r\n",len) == 0 && \
len == 4+5+(12+2)+4+(9+2)); \
free(cmd); \
} while(0)
INTEGER_WIDTH_TEST("d", int);
INTEGER_WIDTH_TEST("hhd", char);
INTEGER_WIDTH_TEST("hd", short);
INTEGER_WIDTH_TEST("ld", long);
INTEGER_WIDTH_TEST("lld", long long);
INTEGER_WIDTH_TEST("u", unsigned int);
INTEGER_WIDTH_TEST("hhu", unsigned char);
INTEGER_WIDTH_TEST("hu", unsigned short);
INTEGER_WIDTH_TEST("lu", unsigned long);
INTEGER_WIDTH_TEST("llu", unsigned long long);
FLOAT_WIDTH_TEST(float);
FLOAT_WIDTH_TEST(double);
test("Format command with invalid printf format: ");
len = redisFormatCommand(&cmd,"key:%08p %b",(void*)1234,"foo",(size_t)3);
test_cond(len == -1);
const char *argv[3];
argv[0] = "SET";
argv[1] = "foo\0xxx";
argv[2] = "bar";
size_t lens[3] = { 3, 7, 3 };
int argc = 3;
test("Format command by passing argc/argv without lengths: ");
len = redisFormatCommandArgv(&cmd,argc,argv,NULL);
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(3+2)+4+(3+2));
free(cmd);
test("Format command by passing argc/argv with lengths: ");
len = redisFormatCommandArgv(&cmd,argc,argv,lens);
test_cond(strncmp(cmd,"*3\r\n$3\r\nSET\r\n$7\r\nfoo\0xxx\r\n$3\r\nbar\r\n",len) == 0 &&
len == 4+4+(3+2)+4+(7+2)+4+(3+2));
free(cmd);
}
static void test_append_formatted_commands(struct config config) {
redisContext *c;
redisReply *reply;
char *cmd;
int len;
c = connect(config);
test("Append format command: ");
len = redisFormatCommand(&cmd, "SET foo bar");
test_cond(redisAppendFormattedCommand(c, cmd, len) == REDIS_OK);
assert(redisGetReply(c, (void*)&reply) == REDIS_OK);
free(cmd);
freeReplyObject(reply);
disconnect(c, 0);
}
static void test_reply_reader(void) {
redisReader *reader;
void *reply;
int ret;
int i;
test("Error handling in reply parser: ");
reader = redisReaderCreate();
redisReaderFeed(reader,(char*)"@foo\r\n",6);
ret = redisReaderGetReply(reader,NULL);
test_cond(ret == REDIS_ERR &&
strcasecmp(reader->errstr,"Protocol error, got \"@\" as reply type byte") == 0);
redisReaderFree(reader);
/* when the reply already contains multiple items, they must be free'd
* on an error. valgrind will bark when this doesn't happen. */
test("Memory cleanup in reply parser: ");
reader = redisReaderCreate();
redisReaderFeed(reader,(char*)"*2\r\n",4);
redisReaderFeed(reader,(char*)"$5\r\nhello\r\n",11);
redisReaderFeed(reader,(char*)"@foo\r\n",6);
ret = redisReaderGetReply(reader,NULL);
test_cond(ret == REDIS_ERR &&
strcasecmp(reader->errstr,"Protocol error, got \"@\" as reply type byte") == 0);
redisReaderFree(reader);
test("Set error on nested multi bulks with depth > 7: ");
reader = redisReaderCreate();
for (i = 0; i < 9; i++) {
redisReaderFeed(reader,(char*)"*1\r\n",4);
}
ret = redisReaderGetReply(reader,NULL);
test_cond(ret == REDIS_ERR &&
strncasecmp(reader->errstr,"No support for",14) == 0);
redisReaderFree(reader);
test("Works with NULL functions for reply: ");
reader = redisReaderCreate();
reader->fn = NULL;
redisReaderFeed(reader,(char*)"+OK\r\n",5);
ret = redisReaderGetReply(reader,&reply);
test_cond(ret == REDIS_OK && reply == (void*)REDIS_REPLY_STATUS);
redisReaderFree(reader);
test("Works when a single newline (\\r\\n) covers two calls to feed: ");
reader = redisReaderCreate();
reader->fn = NULL;
redisReaderFeed(reader,(char*)"+OK\r",4);
ret = redisReaderGetReply(reader,&reply);
assert(ret == REDIS_OK && reply == NULL);
redisReaderFeed(reader,(char*)"\n",1);
ret = redisReaderGetReply(reader,&reply);
test_cond(ret == REDIS_OK && reply == (void*)REDIS_REPLY_STATUS);
redisReaderFree(reader);
test("Don't reset state after protocol error: ");
reader = redisReaderCreate();
reader->fn = NULL;
redisReaderFeed(reader,(char*)"x",1);
ret = redisReaderGetReply(reader,&reply);
assert(ret == REDIS_ERR);
ret = redisReaderGetReply(reader,&reply);
test_cond(ret == REDIS_ERR && reply == NULL);
redisReaderFree(reader);
/* Regression test for issue #45 on GitHub. */
test("Don't do empty allocation for empty multi bulk: ");
reader = redisReaderCreate();
redisReaderFeed(reader,(char*)"*0\r\n",4);
ret = redisReaderGetReply(reader,&reply);
test_cond(ret == REDIS_OK &&
((redisReply*)reply)->type == REDIS_REPLY_ARRAY &&
((redisReply*)reply)->elements == 0);
freeReplyObject(reply);
redisReaderFree(reader);
}
static void test_blocking_connection_errors(void) {
redisContext *c;
test("Returns error when host cannot be resolved: ");
c = redisConnect((char*)"idontexist.local", 6379);
test_cond(c->err == REDIS_ERR_OTHER &&
(strcmp(c->errstr,"Name or service not known") == 0 ||
strcmp(c->errstr,"Can't resolve: idontexist.local") == 0 ||
strcmp(c->errstr,"nodename nor servname provided, or not known") == 0 ||
strcmp(c->errstr,"No address associated with hostname") == 0 ||
strcmp(c->errstr,"no address associated with name") == 0));
redisFree(c);
test("Returns error when the port is not open: ");
c = redisConnect((char*)"localhost", 1);
test_cond(c->err == REDIS_ERR_IO &&
strcmp(c->errstr,"Connection refused") == 0);
redisFree(c);
test("Returns error when the unix socket path doesn't accept connections: ");
c = redisConnectUnix((char*)"/tmp/idontexist.sock");
test_cond(c->err == REDIS_ERR_IO); /* Don't care about the message... */
redisFree(c);
}
static void test_blocking_connection(struct config config) {
redisContext *c;
redisReply *reply;
c = connect(config);
test("Is able to deliver commands: ");
reply = redisCommand(c,"PING");
test_cond(reply->type == REDIS_REPLY_STATUS &&
strcasecmp(reply->str,"pong") == 0)
freeReplyObject(reply);
test("Is a able to send commands verbatim: ");
reply = redisCommand(c,"SET foo bar");
test_cond (reply->type == REDIS_REPLY_STATUS &&
strcasecmp(reply->str,"ok") == 0)
freeReplyObject(reply);
test("%%s String interpolation works: ");
reply = redisCommand(c,"SET %s %s","foo","hello world");
freeReplyObject(reply);
reply = redisCommand(c,"GET foo");
test_cond(reply->type == REDIS_REPLY_STRING &&
strcmp(reply->str,"hello world") == 0);
freeReplyObject(reply);
test("%%b String interpolation works: ");
reply = redisCommand(c,"SET %b %b","foo",(size_t)3,"hello\x00world",(size_t)11);
freeReplyObject(reply);
reply = redisCommand(c,"GET foo");
test_cond(reply->type == REDIS_REPLY_STRING &&
memcmp(reply->str,"hello\x00world",11) == 0)
test("Binary reply length is correct: ");
test_cond(reply->len == 11)
freeReplyObject(reply);
test("Can parse nil replies: ");
reply = redisCommand(c,"GET nokey");
test_cond(reply->type == REDIS_REPLY_NIL)
freeReplyObject(reply);
/* test 7 */
test("Can parse integer replies: ");
reply = redisCommand(c,"INCR mycounter");
test_cond(reply->type == REDIS_REPLY_INTEGER && reply->integer == 1)
freeReplyObject(reply);
test("Can parse multi bulk replies: ");
freeReplyObject(redisCommand(c,"LPUSH mylist foo"));
freeReplyObject(redisCommand(c,"LPUSH mylist bar"));
reply = redisCommand(c,"LRANGE mylist 0 -1");
test_cond(reply->type == REDIS_REPLY_ARRAY &&
reply->elements == 2 &&
!memcmp(reply->element[0]->str,"bar",3) &&
!memcmp(reply->element[1]->str,"foo",3))
freeReplyObject(reply);
/* MULTI/EXEC with a multi bulk reply *before* another reply;
 * specifically tests the ordering of reply items to parse. */
test("Can handle nested multi bulk replies: ");
freeReplyObject(redisCommand(c,"MULTI"));
freeReplyObject(redisCommand(c,"LRANGE mylist 0 -1"));
freeReplyObject(redisCommand(c,"PING"));
reply = (redisCommand(c,"EXEC"));
test_cond(reply->type == REDIS_REPLY_ARRAY &&
reply->elements == 2 &&
reply->element[0]->type == REDIS_REPLY_ARRAY &&
reply->element[0]->elements == 2 &&
!memcmp(reply->element[0]->element[0]->str,"bar",3) &&
!memcmp(reply->element[0]->element[1]->str,"foo",3) &&
reply->element[1]->type == REDIS_REPLY_STATUS &&
strcasecmp(reply->element[1]->str,"pong") == 0);
freeReplyObject(reply);
disconnect(c, 0);
}
static void test_blocking_io_errors(struct config config) {
redisContext *c;
redisReply *reply;
void *_reply;
int major, minor;
/* Connect to target given by config. */
c = connect(config);
{
/* Find out Redis version to determine the path for the next test */
const char *field = "redis_version:";
char *p, *eptr;
reply = redisCommand(c,"INFO");
p = strstr(reply->str,field);
major = strtol(p+strlen(field),&eptr,10);
p = eptr+1; /* char next to the first "." */
minor = strtol(p,&eptr,10);
freeReplyObject(reply);
}
test("Returns I/O error when the connection is lost: ");
reply = redisCommand(c,"QUIT");
if (major > 2 || (major == 2 && minor > 0)) {
/* > 2.0 returns OK on QUIT and read() should be issued once more
* to know the descriptor is at EOF. */
test_cond(strcasecmp(reply->str,"OK") == 0 &&
redisGetReply(c,&_reply) == REDIS_ERR);
freeReplyObject(reply);
} else {
test_cond(reply == NULL);
}
/* On 2.0, QUIT will cause the connection to be closed immediately and
* the read(2) for the reply on QUIT will set the error to EOF.
* On >2.0, QUIT will return with OK and another read(2) needed to be
* issued to find out the socket was closed by the server. In both
* conditions, the error will be set to EOF. */
assert(c->err == REDIS_ERR_EOF &&
strcmp(c->errstr,"Server closed the connection") == 0);
redisFree(c);
c = connect(config);
test("Returns I/O error on socket timeout: ");
struct timeval tv = { 0, 1000 };
assert(redisSetTimeout(c,tv) == REDIS_OK);
test_cond(redisGetReply(c,&_reply) == REDIS_ERR &&
c->err == REDIS_ERR_IO && errno == EAGAIN);
redisFree(c);
}
static void test_invalid_timeout_errors(struct config config) {
redisContext *c;
test("Set error when an invalid timeout usec value is given to redisConnectWithTimeout: ");
config.tcp.timeout.tv_sec = 0;
config.tcp.timeout.tv_usec = 10000001;
c = redisConnectWithTimeout(config.tcp.host, config.tcp.port, config.tcp.timeout);
test_cond(c->err == REDIS_ERR_IO);
test("Set error when an invalid timeout sec value is given to redisConnectWithTimeout: ");
config.tcp.timeout.tv_sec = (((LONG_MAX) - 999) / 1000) + 1;
config.tcp.timeout.tv_usec = 0;
c = redisConnectWithTimeout(config.tcp.host, config.tcp.port, config.tcp.timeout);
test_cond(c->err == REDIS_ERR_IO);
redisFree(c);
}
static void test_throughput(struct config config) {
redisContext *c = connect(config);
redisReply **replies;
int i, num;
long long t1, t2;
test("Throughput:\n");
for (i = 0; i < 500; i++)
freeReplyObject(redisCommand(c,"LPUSH mylist foo"));
num = 1000;
replies = malloc(sizeof(redisReply*)*num);
t1 = usec();
for (i = 0; i < num; i++) {
replies[i] = redisCommand(c,"PING");
assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_STATUS);
}
t2 = usec();
for (i = 0; i < num; i++) freeReplyObject(replies[i]);
free(replies);
printf("\t(%dx PING: %.3fs)\n", num, (t2-t1)/1000000.0);
replies = malloc(sizeof(redisReply*)*num);
t1 = usec();
for (i = 0; i < num; i++) {
replies[i] = redisCommand(c,"LRANGE mylist 0 499");
assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_ARRAY);
assert(replies[i] != NULL && replies[i]->elements == 500);
}
t2 = usec();
for (i = 0; i < num; i++) freeReplyObject(replies[i]);
free(replies);
printf("\t(%dx LRANGE with 500 elements: %.3fs)\n", num, (t2-t1)/1000000.0);
num = 10000;
replies = malloc(sizeof(redisReply*)*num);
for (i = 0; i < num; i++)
redisAppendCommand(c,"PING");
t1 = usec();
for (i = 0; i < num; i++) {
assert(redisGetReply(c, (void*)&replies[i]) == REDIS_OK);
assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_STATUS);
}
t2 = usec();
for (i = 0; i < num; i++) freeReplyObject(replies[i]);
free(replies);
printf("\t(%dx PING (pipelined): %.3fs)\n", num, (t2-t1)/1000000.0);
replies = malloc(sizeof(redisReply*)*num);
for (i = 0; i < num; i++)
redisAppendCommand(c,"LRANGE mylist 0 499");
t1 = usec();
for (i = 0; i < num; i++) {
assert(redisGetReply(c, (void*)&replies[i]) == REDIS_OK);
assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_ARRAY);
assert(replies[i] != NULL && replies[i]->elements == 500);
}
t2 = usec();
for (i = 0; i < num; i++) freeReplyObject(replies[i]);
free(replies);
printf("\t(%dx LRANGE with 500 elements (pipelined): %.3fs)\n", num, (t2-t1)/1000000.0);
disconnect(c, 0);
}
// static long __test_callback_flags = 0;
// static void __test_callback(redisContext *c, void *privdata) {
// ((void)c);
// /* Shift to detect execution order */
// __test_callback_flags <<= 8;
// __test_callback_flags |= (long)privdata;
// }
//
// static void __test_reply_callback(redisContext *c, redisReply *reply, void *privdata) {
// ((void)c);
// /* Shift to detect execution order */
// __test_callback_flags <<= 8;
// __test_callback_flags |= (long)privdata;
// if (reply) freeReplyObject(reply);
// }
//
// static redisContext *__connect_nonblock() {
// /* Reset callback flags */
// __test_callback_flags = 0;
// return redisConnectNonBlock("127.0.0.1", port, NULL);
// }
//
// static void test_nonblocking_connection() {
// redisContext *c;
// int wdone = 0;
//
// test("Calls command callback when command is issued: ");
// c = __connect_nonblock();
// redisSetCommandCallback(c,__test_callback,(void*)1);
// redisCommand(c,"PING");
// test_cond(__test_callback_flags == 1);
// redisFree(c);
//
// test("Calls disconnect callback on redisDisconnect: ");
// c = __connect_nonblock();
// redisSetDisconnectCallback(c,__test_callback,(void*)2);
// redisDisconnect(c);
// test_cond(__test_callback_flags == 2);
// redisFree(c);
//
// test("Calls disconnect callback and free callback on redisFree: ");
// c = __connect_nonblock();
// redisSetDisconnectCallback(c,__test_callback,(void*)2);
// redisSetFreeCallback(c,__test_callback,(void*)4);
// redisFree(c);
// test_cond(__test_callback_flags == ((2 << 8) | 4));
//
// test("redisBufferWrite against empty write buffer: ");
// c = __connect_nonblock();
// test_cond(redisBufferWrite(c,&wdone) == REDIS_OK && wdone == 1);
// redisFree(c);
//
// test("redisBufferWrite against not yet connected fd: ");
// c = __connect_nonblock();
// redisCommand(c,"PING");
// test_cond(redisBufferWrite(c,NULL) == REDIS_ERR &&
// strncmp(c->error,"write:",6) == 0);
// redisFree(c);
//
// test("redisBufferWrite against closed fd: ");
// c = __connect_nonblock();
// redisCommand(c,"PING");
// redisDisconnect(c);
// test_cond(redisBufferWrite(c,NULL) == REDIS_ERR &&
// strncmp(c->error,"write:",6) == 0);
// redisFree(c);
//
// test("Process callbacks in the right sequence: ");
// c = __connect_nonblock();
// redisCommandWithCallback(c,__test_reply_callback,(void*)1,"PING");
// redisCommandWithCallback(c,__test_reply_callback,(void*)2,"PING");
// redisCommandWithCallback(c,__test_reply_callback,(void*)3,"PING");
//
// /* Write output buffer */
// wdone = 0;
// while(!wdone) {
// usleep(500);
// redisBufferWrite(c,&wdone);
// }
//
// /* Read until at least one callback is executed (the 3 replies will
// * arrive in a single packet, causing all callbacks to be executed in
// * a single pass). */
// while(__test_callback_flags == 0) {
// assert(redisBufferRead(c) == REDIS_OK);
// redisProcessCallbacks(c);
// }
// test_cond(__test_callback_flags == 0x010203);
// redisFree(c);
//
// test("redisDisconnect executes pending callbacks with NULL reply: ");
// c = __connect_nonblock();
// redisSetDisconnectCallback(c,__test_callback,(void*)1);
// redisCommandWithCallback(c,__test_reply_callback,(void*)2,"PING");
// redisDisconnect(c);
// test_cond(__test_callback_flags == 0x0201);
// redisFree(c);
// }
int main(int argc, char **argv) {
struct config cfg = {
.tcp = {
.host = "127.0.0.1",
.port = 6379
},
.unix = {
.path = "/tmp/redis.sock"
}
};
int throughput = 1;
int test_inherit_fd = 1;
/* Ignore broken pipe signal (for I/O error tests). */
signal(SIGPIPE, SIG_IGN);
/* Parse command line options. */
argv++; argc--;
while (argc) {
if (argc >= 2 && !strcmp(argv[0],"-h")) {
argv++; argc--;
cfg.tcp.host = argv[0];
} else if (argc >= 2 && !strcmp(argv[0],"-p")) {
argv++; argc--;
cfg.tcp.port = atoi(argv[0]);
} else if (argc >= 2 && !strcmp(argv[0],"-s")) {
argv++; argc--;
cfg.unix.path = argv[0];
} else if (argc >= 1 && !strcmp(argv[0],"--skip-throughput")) {
throughput = 0;
} else if (argc >= 1 && !strcmp(argv[0],"--skip-inherit-fd")) {
test_inherit_fd = 0;
} else {
fprintf(stderr, "Invalid argument: %s\n", argv[0]);
exit(1);
}
argv++; argc--;
}
test_format_commands();
test_reply_reader();
test_blocking_connection_errors();
printf("\nTesting against TCP connection (%s:%d):\n", cfg.tcp.host, cfg.tcp.port);
cfg.type = CONN_TCP;
test_blocking_connection(cfg);
test_blocking_io_errors(cfg);
test_invalid_timeout_errors(cfg);
test_append_formatted_commands(cfg);
if (throughput) test_throughput(cfg);
printf("\nTesting against Unix socket connection (%s):\n", cfg.unix.path);
cfg.type = CONN_UNIX;
test_blocking_connection(cfg);
test_blocking_io_errors(cfg);
if (throughput) test_throughput(cfg);
if (test_inherit_fd) {
printf("\nTesting against inherited fd (%s):\n", cfg.unix.path);
cfg.type = CONN_FD;
test_blocking_connection(cfg);
}
if (fails) {
printf("*** %d TESTS FAILED ***\n", fails);
return 1;
}
printf("ALL TESTS PASSED\n");
return 0;
}

View File

@ -1,13 +0,0 @@
/* Drop in replacement for zmalloc.h in order to just use libc malloc without
* any wrappering. */
#ifndef ZMALLOC_H
#define ZMALLOC_H
#define zmalloc malloc
#define zrealloc realloc
#define zcalloc(x) calloc(x,1)
#define zfree free
#define zstrdup strdup
#endif
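
A minimal usage sketch of the drop-in above (the call sites are hypothetical; the point is only that code written against the zmalloc API resolves to plain libc calls through these macros):

#include <stdlib.h>
#include <string.h>
#include "zmalloc.h"                  /* the drop-in above */

int main(void) {
    char *buf  = zmalloc(32);         /* expands to malloc(32) */
    char *name = zstrdup("hiredis");  /* expands to strdup("hiredis") */
    buf = zrealloc(buf, 64);          /* expands to realloc(buf, 64) */
    zfree(buf);                       /* expands to free(buf) */
    zfree(name);
    return 0;
}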

View File

@ -1,3 +0,0 @@
begin-language: "Autoconf-without-aclocal-m4"
args: --no-cache
end-language: "Autoconf-without-aclocal-m4"

View File

@ -1 +0,0 @@
* text=auto eol=lf

View File

@ -1,75 +0,0 @@
/*.gcov.*
/bin/jemalloc-config
/bin/jemalloc.sh
/bin/jeprof
/config.stamp
/config.log
/config.status
/configure
/doc/html.xsl
/doc/manpages.xsl
/doc/jemalloc.xml
/doc/jemalloc.html
/doc/jemalloc.3
/jemalloc.pc
/lib/
/Makefile
/include/jemalloc/internal/jemalloc_internal.h
/include/jemalloc/internal/jemalloc_internal_defs.h
/include/jemalloc/internal/private_namespace.h
/include/jemalloc/internal/private_unnamespace.h
/include/jemalloc/internal/public_namespace.h
/include/jemalloc/internal/public_symbols.txt
/include/jemalloc/internal/public_unnamespace.h
/include/jemalloc/internal/size_classes.h
/include/jemalloc/jemalloc.h
/include/jemalloc/jemalloc_defs.h
/include/jemalloc/jemalloc_macros.h
/include/jemalloc/jemalloc_mangle.h
/include/jemalloc/jemalloc_mangle_jet.h
/include/jemalloc/jemalloc_protos.h
/include/jemalloc/jemalloc_protos_jet.h
/include/jemalloc/jemalloc_rename.h
/include/jemalloc/jemalloc_typedefs.h
/src/*.[od]
/src/*.gcda
/src/*.gcno
/test/test.sh
test/include/test/jemalloc_test.h
test/include/test/jemalloc_test_defs.h
/test/integration/[A-Za-z]*
!/test/integration/[A-Za-z]*.*
/test/integration/*.[od]
/test/integration/*.gcda
/test/integration/*.gcno
/test/integration/*.out
/test/src/*.[od]
/test/src/*.gcda
/test/src/*.gcno
/test/stress/[A-Za-z]*
!/test/stress/[A-Za-z]*.*
/test/stress/*.[od]
/test/stress/*.gcda
/test/stress/*.gcno
/test/stress/*.out
/test/unit/[A-Za-z]*
!/test/unit/[A-Za-z]*.*
/test/unit/*.[od]
/test/unit/*.gcda
/test/unit/*.gcno
/test/unit/*.out
/VERSION

View File

@ -1,27 +0,0 @@
Unless otherwise specified, files in the jemalloc source distribution are
subject to the following license:
--------------------------------------------------------------------------------
Copyright (C) 2002-2015 Jason Evans <jasone@canonware.com>.
All rights reserved.
Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved.
Copyright (C) 2009-2015 Facebook, Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice(s),
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice(s),
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------

View File

@ -1,788 +0,0 @@
Following are change highlights associated with official releases. Important
bug fixes are all mentioned, but some internal enhancements are omitted here for
brevity. Much more detail can be found in the git revision history:
https://github.com/jemalloc/jemalloc
* 4.0.3 (September 24, 2015)
This bugfix release continues the trend of xallocx() and heap profiling fixes.
Bug fixes:
- Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large
allocations when --enable-cache-oblivious configure option is enabled.
- Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
when resizing from/to a size class that is not a multiple of the chunk size.
- Fix prof_tctx_dump_iter() to filter out nodes that were created after heap
profile dumping started.
- Work around a potentially bad thread-specific data initialization
interaction with NPTL (glibc's pthreads implementation).
* 4.0.2 (September 21, 2015)
This bugfix release addresses a few bugs specific to heap profiling.
Bug fixes:
- Fix ixallocx_prof_sample() to never modify nor create sampled small
allocations. xallocx() is in general incapable of moving small allocations,
so this fix removes buggy code without loss of generality.
- Fix irallocx_prof_sample() to always allocate large regions, even when
alignment is non-zero.
- Fix prof_alloc_rollback() to read tdata from thread-specific data rather
than dereferencing a potentially invalid tctx.
* 4.0.1 (September 15, 2015)
This is a bugfix release that is somewhat high risk due to the amount of
refactoring required to address deep xallocx() problems. As a side effect of
these fixes, xallocx() now tries harder to partially fulfill requests for
optional extra space. Note that a couple of minor heap profiling
optimizations are included, but these are better thought of as performance
fixes that were integral to discovering most of the other bugs.
Optimizations:
- Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
fast path when heap profiling is enabled. Additionally, split a special
case out into arena_prof_tctx_reset(), which also avoids chunk metadata
reads.
- Optimize irallocx_prof() to optimistically update the sampler state. The
prior implementation appears to have been a holdover from when
rallocx()/xallocx() functionality was combined as rallocm().
Bug fixes:
- Fix TLS configuration such that it is enabled by default for platforms on
which it works correctly.
- Fix arenas_cache_cleanup() and arena_get_hard() to handle
allocation/deallocation within the application's thread-specific data
cleanup functions even after arenas_cache is torn down.
- Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
- Fix chunk purge hook calls for in-place huge shrinking reallocation to
specify the old chunk size rather than the new chunk size. This bug caused
no correctness issues for the default chunk purge function, but was
visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
- Fix heap profiling bugs:
+ Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl). This bug
could cause data structure corruption that would most likely result in a
segfault.
+ Fix irealloc_prof() to prof_alloc_rollback() on OOM.
+ Make one call to prof_active_get_unlocked() per allocation event, and use
the result throughout the relevant functions that handle an allocation
event. Also add a missing check in prof_realloc(). These fixes protect
allocation events against concurrent prof_active changes.
+ Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
in the correct order.
+ Fix prof_realloc() to call prof_free_sampled_object() after calling
prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were
the same, the tctx could have been prematurely destroyed.
- Fix portability bugs:
+ Don't bitshift by negative amounts when encoding/decoding run sizes in
chunk header maps. This affected systems with page sizes greater than 8
KiB.
+ Rename index_t to szind_t to avoid an existing type on Solaris.
+ Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
match glibc and avoid compilation errors when including both
jemalloc/jemalloc.h and malloc.h in C++ code.
+ Don't assume that /bin/sh is appropriate when running size_classes.sh
during configuration.
+ Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
+ Link tests to librt if it contains clock_gettime(2).
* 4.0.0 (August 17, 2015)
This version contains many speed and space optimizations, both minor and
major. The major themes are generalization, unification, and simplification.
Although many of these optimizations cause no visible behavior change, their
cumulative effect is substantial.
New features:
- Normalize size class spacing to be consistent across the complete size
range. By default there are four size classes per size doubling, but this
is now configurable via the --with-lg-size-class-group option. Also add the
--with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
--with-lg-tiny-min options, which can be used to tweak page and size class
settings. Impacts:
+ Worst case performance for incrementally growing/shrinking reallocation
is improved because there are far fewer size classes, and therefore
copying happens less often.
+ Internal fragmentation is limited to 20% for all but the smallest size
classes (those less than four times the quantum). For example, (1B + 4 KiB)
and (1B + 4 MiB) allocations previously suffered nearly 50% internal fragmentation.
+ Chunk fragmentation tends to be lower because there are fewer distinct run
sizes to pack.
- Add support for explicit tcaches. The "tcache.create", "tcache.flush", and
"tcache.destroy" mallctls control tcache lifetime and flushing, and the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API
control which tcache is used for each operation.
- Implement per thread heap profiling, as well as the ability to
enable/disable heap profiling on a per thread basis. Add the "prof.reset",
"prof.lg_sample", "thread.prof.name", "thread.prof.active",
"opt.prof_thread_active_init", "prof.thread_active_init", and
"thread.prof.active" mallctls.
- Add support for per arena application-specified chunk allocators, configured
via the "arena.<i>.chunk_hooks" mallctl.
- Refactor huge allocation to be managed by arenas, so that arenas now
function as general purpose independent allocators. This is important in
the context of user-specified chunk allocators, aside from the scalability
benefits. Related new statistics:
+ The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
"stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
mallctls provide high level per arena huge allocation statistics.
+ The "arenas.nhchunks", "arenas.hchunk.<i>.size",
"stats.arenas.<i>.hchunks.<j>.nmalloc",
"stats.arenas.<i>.hchunks.<j>.ndalloc",
"stats.arenas.<i>.hchunks.<j>.nrequests", and
"stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size class
statistics.
- Add the 'util' column to malloc_stats_print() output, which reports the
proportion of available regions that are currently in use for each small
size class.
- Add "alloc" and "free" modes for for junk filling (see the "opt.junk"
mallctl), so that it is possible to separately enable junk filling for
allocation versus deallocation.
- Add the jemalloc-config script, which provides information about how
jemalloc was configured, and how to integrate it into application builds.
- Add metadata statistics, which are accessible via the "stats.metadata",
"stats.arenas.<i>.metadata.mapped", and
"stats.arenas.<i>.metadata.allocated" mallctls.
- Add the "stats.resident" mallctl, which reports the upper limit of
physically resident memory mapped by the allocator.
- Add per arena control over unused dirty page purging, via the
"arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
"stats.arenas.<i>.lg_dirty_mult" mallctls.
- Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
feature on/off during program execution.
- Add sdallocx(), which implements sized deallocation. The primary
optimization over dallocx() is the removal of a metadata read, which often
suffers an L1 cache miss.
- Add missing header includes in jemalloc/jemalloc.h, so that applications
only have to #include <jemalloc/jemalloc.h>.
- Add support for additional platforms:
+ Bitrig
+ Cygwin
+ DragonFlyBSD
+ iOS
+ OpenBSD
+ OpenRISC/or1k
Optimizations:
- Maintain dirty runs in per arena LRUs rather than in per arena trees of
dirty-run-containing chunks. In practice this change significantly reduces
dirty page purging volume.
- Integrate whole chunks into the unused dirty page purging machinery. This
reduces the cost of repeated huge allocation/deallocation, because it
effectively introduces a cache of chunks.
- Split the arena chunk map into two separate arrays, in order to increase
cache locality for the frequently accessed bits.
- Move small run metadata out of runs, into arena chunk headers. This reduces
run fragmentation, smaller runs reduce external fragmentation for small size
classes, and packed (less uniformly aligned) metadata layout improves CPU
cache set distribution.
- Randomly distribute large allocation base pointer alignment relative to page
boundaries in order to more uniformly utilize CPU cache sets. This can be
disabled via the --disable-cache-oblivious configure option, and queried via
the "config.cache_oblivious" mallctl.
- Micro-optimize the fast paths for the public API functions.
- Refactor thread-specific data to reside in a single structure. This assures
that only a single TLS read is necessary per call into the public API.
- Implement in-place huge allocation growing and shrinking.
- Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
additional optimizations that reduce maximum lookup depth to one or two
levels. This resolves what was a concurrency bottleneck for per arena huge
allocation, because a global data structure is critical for determining
which arenas own which huge allocations.
Incompatible changes:
- Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
warnings by default.
- Assure that the constness of malloc_usable_size()'s return type matches that
of the system implementation.
- Change the heap profile dump format to support per thread heap profiling,
rename pprof to jeprof, and enhance it with the --thread=<n> option. As a
result, the bundled jeprof must now be used rather than the upstream
(gperftools) pprof.
- Disable "opt.prof_final" by default, in order to avoid atexit(3), which can
internally deadlock on some platforms.
- Change the "arenas.nlruns" mallctl type from size_t to unsigned.
- Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
"stats.arenas.<i>.bins.<j>.curregs".
- Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
- Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.
Removed features:
- Remove the *allocm() API, which is superseded by the *allocx() API.
- Remove the --enable-dss options, and make dss non-optional on all platforms
which support sbrk(2).
- Remove the "arenas.purge" mallctl, which was obsoleted by the
"arena.<i>.purge" mallctl in 3.1.0.
- Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
detects whether it is running inside Valgrind.
- Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
"stats.huge.ndalloc" mallctls.
- Remove the --enable-mremap option.
- Remove the "stats.chunks.current", "stats.chunks.total", and
"stats.chunks.high" mallctls.
Bug fixes:
- Fix the cactive statistic to decrease (rather than increase) when active
memory decreases. This regression was first released in 3.5.0.
- Fix OOM handling in memalign() and valloc(). A variant of this bug existed
in all releases since 2.0.0, which introduced these functions.
- Fix an OOM-related regression in arena_tcache_fill_small(), which could
cause cache corruption on OOM. This regression was present in all releases
from 2.2.0 through 3.6.0.
- Fix size class overflow handling for malloc(), posix_memalign(), memalign(),
calloc(), and realloc() when profiling is enabled.
- Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
- Fix fallback lg_floor() implementations to handle extremely large inputs.
- Ensure the default purgeable zone is after the default zone on OS X.
- Fix latent bugs in atomic_*().
- Fix the "arena.<i>.dss" mallctl to handle read-only calls.
- Fix tls_model configuration to enable the initial-exec model when possible.
- Mark malloc_conf as a weak symbol so that the application can override it.
- Correctly detect glibc's adaptive pthread mutexes.
- Fix the --without-export configure option.
* 3.6.0 (March 31, 2014)
This version contains a critical bug fix for a regression present in 3.5.0 and
3.5.1.
Bug fixes:
- Fix a regression in arena_chunk_alloc() that caused crashes during
small/large allocation if chunk allocation failed. In the absence of this
bug, chunk allocation failure would result in allocation failure, e.g. NULL
return from malloc(). This regression was introduced in 3.5.0.
- Fix backtracing for gcc intrinsics-based backtracing by specifying
-fno-omit-frame-pointer to gcc. Note that the application (and all the
libraries it links to) must also be compiled with this option for
backtracing to be reliable.
- Use dss allocation precedence for huge allocations as well as small/large
allocations.
- Fix test assertion failure message formatting. This bug did not manifest on
x86_64 systems because of implementation subtleties in va_list.
- Fix inconsequential test failures for hash and SFMT code.
New features:
- Support heap profiling on FreeBSD. This feature depends on the proc
filesystem being mounted during heap profile dumping.
* 3.5.1 (February 25, 2014)
This version primarily addresses minor bugs in test code.
Bug fixes:
- Configure Solaris/Illumos to use MADV_FREE.
- Fix junk filling for mremap(2)-based huge reallocation. This is only
relevant if configuring with the --enable-mremap option specified.
- Avoid compilation failure if 'restrict' C99 keyword is not supported by the
compiler.
- Add a configure test for SSE2 rather than assuming it is usable on i686
systems. This fixes test compilation errors, especially on 32-bit Linux
systems.
- Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit
test.
- Fix/remove flawed alignment-related overflow tests.
- Prevent compiler optimizations that could change backtraces in the
prof_accum unit test.
* 3.5.0 (January 22, 2014)
This version focuses on refactoring and automated testing, though it also
includes some non-trivial heap profiling optimizations not mentioned below.
New features:
- Add the *allocx() API, which is a successor to the experimental *allocm()
API. The *allocx() functions are slightly simpler to use because they have
fewer parameters, they directly return the results of primary interest, and
mallocx()/rallocx() avoid the strict aliasing pitfall that
allocm()/rallocm() share with posix_memalign(). Note that *allocm() is
slated for removal in the next non-bugfix release.
- Add support for LinuxThreads.
Bug fixes:
- Unless heap profiling is enabled, disable floating point code and don't link
with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64
systems, makes it possible to completely disable floating point register
use. Some versions of glibc neglect to save/restore caller-saved floating
point registers during dynamic lazy symbol loading, and the symbol loading
code uses whatever malloc the application happens to have linked/loaded
with, the result being potential floating point register corruption.
- Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
backtrace creation in imemalign(). This bug impacted posix_memalign() and
aligned_alloc().
- Fix a file descriptor leak in a prof_dump_maps() error path.
- Fix prof_dump() to close the dump file descriptor for all relevant error
paths.
- Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for
allocation, not just deallocation.
- Fix a data race for large allocation stats counters.
- Fix a potential infinite loop during thread exit. This bug occurred on
Solaris, and could affect other platforms with similar pthreads TSD
implementations.
- Don't junk-fill reallocations unless usable size changes. This fixes a
violation of the *allocx()/*allocm() semantics.
- Fix growing large reallocation to junk fill new space.
- Fix huge deallocation to junk fill when munmap is disabled.
- Change the default private namespace prefix from empty to je_, and change
--with-private-namespace-prefix so that it prepends an additional prefix
rather than replacing je_. This reduces the likelihood of applications
which statically link jemalloc experiencing symbol name collisions.
- Add missing private namespace mangling (relevant when
--with-private-namespace is specified).
- Add and use JEMALLOC_INLINE_C so that static inline functions are marked as
static even for debug builds.
- Add a missing mutex unlock in a malloc_init_hard() error path. In practice
this error path is never executed.
- Fix numerous bugs in malloc_strtoumax() error handling/reporting. These
bugs had no impact except for malformed inputs.
- Fix numerous bugs in malloc_snprintf(). These bugs were not exercised by
existing calls, so they had no impact.
* 3.4.1 (October 20, 2013)
Bug fixes:
- Fix a race in the "arenas.extend" mallctl that could cause memory corruption
of internal data structures and subsequent crashes.
- Fix Valgrind integration flaws that caused Valgrind warnings about reads of
uninitialized memory in:
+ arena chunk headers
+ internal zero-initialized data structures (relevant to tcache and prof
code)
- Preserve errno during the first allocation. A readlink(2) call during
initialization fails unless /etc/malloc.conf exists, so errno was typically
set during the first allocation prior to this fix.
- Fix compilation warnings reported by gcc 4.8.1.
* 3.4.0 (June 2, 2013)
This version is essentially a small bugfix release, but the addition of
aarch64 support requires that the minor version be incremented.
Bug fixes:
- Fix race-triggered deadlocks in chunk_record(). These deadlocks were
typically triggered by multiple threads concurrently deallocating huge
objects.
New features:
- Add support for the aarch64 architecture.
* 3.3.1 (March 6, 2013)
This version fixes bugs that are typically encountered only when utilizing
custom run-time options.
Bug fixes:
- Fix a locking order bug that could cause deadlock during fork if heap
profiling were enabled.
- Fix a chunk recycling bug that could cause the allocator to lose track of
whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could cause
corruption if allocating via sbrk(2) (unlikely unless running with the
"dss:primary" option specified). This was completely harmless on Linux
unless using mlockall(2) (and unlikely even then, unless the
--disable-munmap configure option or the "dss:primary" option was
specified). This regression was introduced in 3.1.0 by the
mlockall(2)/madvise(2) interaction fix.
- Fix TLS-related memory corruption that could occur during thread exit if the
thread never allocated memory. Only the quarantine and prof facilities were
susceptible.
- Fix two quarantine bugs:
+ Internal reallocation of the quarantined object array leaked the old
array.
+ Reallocation failure for internal reallocation of the quarantined object
array (very unlikely) resulted in memory corruption.
- Fix Valgrind integration to annotate all internally allocated memory in a
way that keeps Valgrind happy about internal data structure access.
- Fix building for s390 systems.
* 3.3.0 (January 23, 2013)
This version includes a few minor performance improvements in addition to the
listed new features and bug fixes.
New features:
- Add clipping support to lg_chunk option processing.
- Add the --enable-ivsalloc option.
- Add the --without-export option.
- Add the --disable-zone-allocator option.
Bug fixes:
- Fix "arenas.extend" mallctl to output the number of arenas.
- Fix chunk_recycle() to unconditionally inform Valgrind that returned memory
is undefined.
- Fix build break on FreeBSD related to alloca.h.
* 3.2.0 (November 9, 2012)
In addition to a couple of bug fixes, this version modifies page run
allocation and dirty page purging algorithms in order to better control
page-level virtual memory fragmentation.
Incompatible changes:
- Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1).
Bug fixes:
- Fix dss/mmap allocation precedence code to use recyclable mmap memory only
after primary dss allocation fails.
- Fix deadlock in the "arenas.purge" mallctl. This regression was introduced
in 3.1.0 by the addition of the "arena.<i>.purge" mallctl.
* 3.1.0 (October 16, 2012)
New features:
- Auto-detect whether running inside Valgrind, thus removing the need to
manually specify MALLOC_CONF=valgrind:true.
- Add the "arenas.extend" mallctl, which allows applications to create
manually managed arenas.
- Add the ALLOCM_ARENA() flag for {,r,d}allocm().
- Add the "opt.dss", "arena.<i>.dss", and "stats.arenas.<i>.dss" mallctls,
which provide control over dss/mmap precedence.
- Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
- Define LG_QUANTUM for hppa.
Incompatible changes:
- Disable tcache by default if running inside Valgrind, in order to avoid
making unallocated objects appear reachable to Valgrind.
- Drop const from malloc_usable_size() argument on Linux.
Bug fixes:
- Fix heap profiling crash if sampled object is freed via realloc(p, 0).
- Remove const from __*_hook variable declarations, so that glibc can modify
them during process forking.
- Fix mlockall(2)/madvise(2) interaction.
- Fix fork(2)-related deadlocks.
- Fix error return value for "thread.tcache.enabled" mallctl.
* 3.0.0 (May 11, 2012)
Although this version adds some major new features, the primary focus is on
internal code cleanup that facilitates maintainability and portability, most
of which is not reflected in the ChangeLog. This is the first release to
incorporate substantial contributions from numerous other developers, and the
result is a more broadly useful allocator (see the git revision history for
contribution details). Note that the license has been unified, thanks to
Facebook granting a license under the same terms as the other copyright
holders (see COPYING).
New features:
- Implement Valgrind support, redzones, and quarantine.
- Add support for additional platforms:
+ FreeBSD
+ Mac OS X Lion
+ MinGW
+ Windows (no support yet for replacing the system malloc)
- Add support for additional architectures:
+ MIPS
+ SH4
+ Tilera
- Add support for cross compiling.
- Add nallocm(), which rounds a request size up to the nearest size class
without actually allocating.
- Implement aligned_alloc() (blame C11).
- Add the "thread.tcache.enabled" mallctl.
- Add the "opt.prof_final" mallctl.
- Update pprof (from gperftools 2.0).
- Add the --with-mangling option.
- Add the --disable-experimental option.
- Add the --disable-munmap option, and make it the default on Linux.
- Add the --enable-mremap option, which disables use of mremap(2) by default.
Incompatible changes:
- Enable stats by default.
- Enable fill by default.
- Disable lazy locking by default.
- Rename the "tcache.flush" mallctl to "thread.tcache.flush".
- Rename the "arenas.pagesize" mallctl to "arenas.page".
- Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
- Change the "opt.prof_accum" default from true to false.
Removed features:
- Remove the swap feature, including the "config.swap", "swap.avail",
"swap.prezeroed", "swap.nfds", and "swap.fds" mallctls.
- Remove highruns statistics, including the
"stats.arenas.<i>.bins.<j>.highruns" and
"stats.arenas.<i>.lruns.<j>.highruns" mallctls.
- As part of small size class refactoring, remove the "opt.lg_[qc]space_max",
"arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and
"arenas.[tqcs]bins" mallctls.
- Remove the "arenas.chunksize" mallctl.
- Remove the "opt.lg_prof_tcmax" option.
- Remove the "opt.lg_prof_bt_max" option.
- Remove the "opt.lg_tcache_gc_sweep" option.
- Remove the --disable-tiny option, including the "config.tiny" mallctl.
- Remove the --enable-dynamic-page-shift configure option.
- Remove the --enable-sysv configure option.
Bug fixes:
- Fix a statistics-related bug in the "thread.arena" mallctl that could cause
invalid statistics and crashes.
- Work around TLS deallocation via free() on Linux. This bug could cause
write-after-free memory corruption.
- Fix a potential deadlock that could occur during interval- and
growth-triggered heap profile dumps.
- Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags.
- Fix chunk_alloc_dss() to stop claiming memory is zeroed. This bug could
cause memory corruption and crashes with --enable-dss specified.
- Fix fork-related bugs that could cause deadlock in children between fork
and exec.
- Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.
- Fix realloc(p, 0) to act like free(p).
- Do not enforce minimum alignment in memalign().
- Check for NULL pointer in malloc_usable_size().
- Fix an off-by-one heap profile statistics bug that could be observed in
interval- and growth-triggered heap profiles.
- Fix the "epoch" mallctl to update cached stats even if the passed in epoch
is 0.
- Fix bin->runcur management to fix a layout policy bug. This bug did not
affect correctness.
- Fix a bug in choose_arena_hard() that potentially caused more arenas to be
initialized than necessary.
- Add missing "opt.lg_tcache_max" mallctl implementation.
- Use glibc allocator hooks to make mixed allocator usage less likely.
- Fix build issues for --disable-tcache.
- Don't mangle pthread_create() when --with-private-namespace is specified.
* 2.2.5 (November 14, 2011)
Bug fixes:
- Fix huge_ralloc() race when using mremap(2). This is a serious bug that
could cause memory corruption and/or crashes.
- Fix huge_ralloc() to maintain chunk statistics.
- Fix malloc_stats_print(..., "a") output.
* 2.2.4 (November 5, 2011)
Bug fixes:
- Initialize arenas_tsd before using it. This bug existed for 2.2.[0-3], as
well as for --disable-tls builds in earlier releases.
- Do not assume a 4 KiB page size in test/rallocm.c.
* 2.2.3 (August 31, 2011)
This version fixes numerous bugs related to heap profiling.
Bug fixes:
- Fix a prof-related race condition. This bug could cause memory corruption,
but only occurred in non-default configurations (prof_accum:false).
- Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is
excluded from backtraces).
- Fix a prof-related bug in realloc() (only triggered by OOM errors).
- Fix prof-related bugs in allocm() and rallocm().
- Fix prof_tdata_cleanup() for --disable-tls builds.
- Fix a relative include path, to fix objdir builds.
* 2.2.2 (July 30, 2011)
Bug fixes:
- Fix a build error for --disable-tcache.
- Fix assertions in arena_purge() (for real this time).
- Add the --with-private-namespace option. This is a workaround for symbol
conflicts that can inadvertently arise when using static libraries.
* 2.2.1 (March 30, 2011)
Bug fixes:
- Implement atomic operations for x86/x64. This fixes compilation failures
for versions of gcc that are still in wide use.
- Fix an assertion in arena_purge().
* 2.2.0 (March 22, 2011)
This version incorporates several improvements to algorithms and data
structures that tend to reduce fragmentation and increase speed.
New features:
- Add the "stats.cactive" mallctl.
- Update pprof (from google-perftools 1.7).
- Improve backtracing-related configuration logic, and add the
--disable-prof-libgcc option.
Bug fixes:
- Change default symbol visibility from "internal", to "hidden", which
decreases the overhead of library-internal function calls.
- Fix symbol visibility so that it is also set on OS X.
- Fix a build dependency regression caused by the introduction of the .pic.o
suffix for PIC object files.
- Add missing checks for mutex initialization failures.
- Don't use libgcc-based backtracing except on x64, where it is known to work.
- Fix deadlocks on OS X that were due to memory allocation in
pthread_mutex_lock().
- Heap profiling-specific fixes:
+ Fix memory corruption due to integer overflow in small region index
computation, when using a small enough sample interval that profiling
context pointers are stored in small run headers.
+ Fix a bootstrap ordering bug that only occurred with TLS disabled.
+ Fix a rallocm() rsize bug.
+ Fix error detection bugs for aligned memory allocation.
* 2.1.3 (March 14, 2011)
Bug fixes:
- Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix
for OS X in 2.1.2).
- Fix a "thread.arena" mallctl bug.
- Fix a thread cache stats merging bug.
* 2.1.2 (March 2, 2011)
Bug fixes:
- Fix "thread.{de,}allocatedp" mallctl for OS X.
- Add missing jemalloc.a to build system.
* 2.1.1 (January 31, 2011)
Bug fixes:
- Fix aligned huge reallocation (affected allocm()).
- Fix the ALLOCM_LG_ALIGN macro definition.
- Fix a heap dumping deadlock.
- Fix a "thread.arena" mallctl bug.
* 2.1.0 (December 3, 2010)
This version incorporates some optimizations that can't quite be considered
bug fixes.
New features:
- Use Linux's mremap(2) for huge object reallocation when possible.
- Avoid locking in mallctl*() when possible.
- Add the "thread.[de]allocatedp" mallctl's.
- Convert the manual page source from roff to DocBook, and generate both roff
and HTML manuals.
Bug fixes:
- Fix a crash due to incorrect bootstrap ordering. This only impacted
--enable-debug --enable-dss configurations.
- Fix a minor statistics bug for mallctl("swap.avail", ...).
* 2.0.1 (October 29, 2010)
Bug fixes:
- Fix a race condition in heap profiling that could cause undefined behavior
if "opt.prof_accum" were disabled.
- Add missing mutex unlocks for some OOM error paths in the heap profiling
code.
- Fix a compilation error for non-C99 builds.
* 2.0.0 (October 24, 2010)
This version focuses on the experimental *allocm() API, and on improved
run-time configuration/introspection. Nonetheless, numerous performance
improvements are also included.
New features:
- Implement the experimental {,r,s,d}allocm() API, which provides a superset
of the functionality available via malloc(), calloc(), posix_memalign(),
realloc(), malloc_usable_size(), and free(). These functions can be used to
allocate/reallocate aligned zeroed memory, ask for optional extra memory
during reallocation, prevent object movement during reallocation, etc.
- Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is
more human-readable, and more flexible. For example:
JEMALLOC_OPTIONS=AJP
is now:
MALLOC_CONF=abort:true,fill:true,stats_print:true
- Port to Apple OS X. Sponsored by Mozilla.
- Make it possible for the application to control thread-->arena mappings via
the "thread.arena" mallctl.
- Add compile-time support for all TLS-related functionality via pthreads TSD.
This is mainly of interest for OS X, which does not support TLS, but has a
TSD implementation with similar performance.
- Override memalign() and valloc() if they are provided by the system.
- Add the "arenas.purge" mallctl, which can be used to synchronously purge all
dirty unused pages.
- Make cumulative heap profiling data optional, so that it is possible to
limit the amount of memory consumed by heap profiling data structures.
- Add per thread allocation counters that can be accessed via the
"thread.allocated" and "thread.deallocated" mallctls.
Incompatible changes:
- Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above).
- Increase default backtrace depth from 4 to 128 for heap profiling.
- Disable interval-based profile dumps by default.
Bug fixes:
- Remove bad assertions in fork handler functions. These assertions could
cause aborts for some combinations of configure settings.
- Fix strerror_r() usage to deal with non-standard semantics in GNU libc.
- Fix leak context reporting. This bug tended to cause the number of contexts
to be underreported (though the reported number of objects and bytes were
correct).
- Fix a realloc() bug for large in-place growing reallocation. This bug could
cause memory corruption, but it was hard to trigger.
- Fix an allocation bug for small allocations that could be triggered if
multiple threads raced to create a new run of backing pages.
- Enhance the heap profiler to trigger samples based on usable size, rather
than request size.
- Fix a heap profiling bug due to sometimes losing track of requested object
size for sampled objects.
* 1.0.3 (August 12, 2010)
Bug fixes:
- Fix the libunwind-based implementation of stack backtracing (used for heap
profiling). This bug could cause zero-length backtraces to be reported.
- Add a missing mutex unlock in library initialization code. If multiple
threads raced to initialize malloc, some of them could end up permanently
blocked.
* 1.0.2 (May 11, 2010)
Bug fixes:
- Fix junk filling of large objects, which could cause memory corruption.
- Add MAP_NORESERVE support for chunk mapping, because otherwise virtual
memory limits could cause swap file configuration to fail. Contributed by
Jordan DeLong.
* 1.0.1 (April 14, 2010)
Bug fixes:
- Fix compilation when --enable-fill is specified.
- Fix threads-related profiling bugs that affected accuracy and caused memory
to be leaked during thread exit.
- Fix dirty page purging race conditions that could cause crashes.
- Fix crash in tcache flushing code during thread destruction.
* 1.0.0 (April 11, 2010)
This release focuses on speed and run-time introspection. Numerous
algorithmic improvements make this release substantially faster than its
predecessors.
New features:
- Implement autoconf-based configuration system.
- Add mallctl*(), for the purposes of introspection and run-time
configuration.
- Make it possible for the application to manually flush a thread's cache, via
the "tcache.flush" mallctl.
- Base maximum dirty page count on proportion of active memory.
- Compute various additional run-time statistics, including per size class
statistics for large objects.
- Expose malloc_stats_print(), which can be called repeatedly by the
application.
- Simplify the malloc_message() signature to only take one string argument,
and incorporate an opaque data pointer argument for use by the application
in combination with malloc_stats_print().
- Add support for allocation backed by one or more swap files, and allow the
application to disable over-commit if swap files are in use.
- Implement allocation profiling and leak checking.
Removed features:
- Remove the dynamic arena rebalancing code, since thread-specific caching
reduces its utility.
Bug fixes:
- Modify chunk allocation to work when address space layout randomization
(ASLR) is in use.
- Fix thread cleanup bugs related to TLS destruction.
- Handle 0-size allocation requests in posix_memalign().
- Fix a chunk leak. The leaked chunks were never touched, so this impacted
virtual memory usage, but not physical memory usage.
* linux_2008082[78]a (August 27/28, 2008)
These snapshot releases are the simple result of incorporating Linux-specific
support into the FreeBSD malloc sources.
--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80

View File

@ -1,402 +0,0 @@
Building and installing a packaged release of jemalloc can be as simple as
typing the following while in the root directory of the source tree:
./configure
make
make install
If building from unpackaged developer sources, the simplest command sequence
that might work is:
./autogen.sh
make dist
make
make install
Note that documentation is not built by the default target because doing so
would create a dependency on xsltproc in packaged releases, hence the
requirement to either run 'make dist' or avoid installing docs via the various
install_* targets documented below.
=== Advanced configuration =====================================================
The 'configure' script supports numerous options that allow control of which
functionality is enabled, where jemalloc is installed, etc. Optionally, pass
any of the following arguments (not a definitive list) to 'configure':
--help
Print a definitive list of options.
--prefix=<install-root-dir>
Set the base directory in which to install. For example:
./configure --prefix=/usr/local
will cause files to be installed into /usr/local/include, /usr/local/lib,
and /usr/local/man.
--with-rpath=<colon-separated-rpath>
Embed one or more library paths, so that libjemalloc can find the libraries
it is linked to. This works only on ELF-based systems.
--with-mangling=<map>
Mangle public symbols specified in <map> which is a comma-separated list of
name:mangled pairs.
For example, to use ld's --wrap option as an alternative method for
overriding libc's malloc implementation, specify something like:
--with-mangling=malloc:__wrap_malloc,free:__wrap_free[...]
Note that mangling happens prior to application of the prefix specified by
--with-jemalloc-prefix, and mangled symbols are then ignored when applying
the prefix.
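For illustration, a hedged sketch of the --wrap route described above (the mangling map and linker flags come from the example; the file name and exact build line are assumptions): the application keeps calling malloc() and free(), and the linker redirects those references to the mangled jemalloc entry points.

/* app.c -- unchanged application code; no jemalloc headers required. */
#include <stdlib.h>

int main(void) {
    void *p = malloc(128);  /* rewritten to __wrap_malloc by -Wl,--wrap=malloc */
    free(p);                /* rewritten to __wrap_free by -Wl,--wrap=free */
    return 0;
}

/* Assumed build sketch:
 *   cc app.c libjemalloc.a -lpthread -Wl,--wrap=malloc -Wl,--wrap=free
 * with jemalloc configured using
 *   --with-mangling=malloc:__wrap_malloc,free:__wrap_free
 * Only references in the objects being linked are redirected; allocations
 * made inside other already-linked shared libraries are unaffected. */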
--with-jemalloc-prefix=<prefix>
Prefix all public APIs with <prefix>. For example, if <prefix> is
"prefix_", API changes like the following occur:
malloc() --> prefix_malloc()
malloc_conf --> prefix_malloc_conf
/etc/malloc.conf --> /etc/prefix_malloc.conf
MALLOC_CONF --> PREFIX_MALLOC_CONF
This makes it possible to use jemalloc at the same time as the system
allocator, or even to use multiple copies of jemalloc simultaneously.
By default, the prefix is "", except on OS X, where it is "je_". On OS X,
jemalloc overlays the default malloc zone, but makes no attempt to actually
replace the "malloc", "calloc", etc. symbols.
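As a rough sketch of the coexistence this enables (assuming a build configured with --with-jemalloc-prefix=je_ and linked in addition to libc; the prototypes are written out by hand here rather than including jemalloc's generated header):

#include <stdlib.h>

void *je_malloc(size_t size);   /* prefixed jemalloc entry points */
void  je_free(void *ptr);

int main(void) {
    void *a = malloc(64);       /* served by the system allocator */
    void *b = je_malloc(64);    /* served by jemalloc */
    free(a);
    je_free(b);
    return 0;
}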
--without-export
Don't export public APIs. This can be useful when building jemalloc as a
static library, or to avoid exporting public APIs when using the zone
allocator on OSX.
--with-private-namespace=<prefix>
Prefix all library-private APIs with <prefix>je_. For shared libraries,
symbol visibility mechanisms prevent these symbols from being exported, but
for static libraries, naming collisions are a real possibility. By
default, <prefix> is empty, which results in a symbol prefix of je_ .
--with-install-suffix=<suffix>
Append <suffix> to the base name of all installed files, such that multiple
versions of jemalloc can coexist in the same installation directory. For
example, libjemalloc.so.0 becomes libjemalloc<suffix>.so.0.
--disable-cc-silence
Disable code that silences non-useful compiler warnings. This is mainly
useful during development when auditing the set of warnings that are being
silenced.
--enable-debug
Enable assertions and validation code. This incurs a substantial
performance hit, but is very useful during application development.
Implies --enable-ivsalloc.
--enable-code-coverage
Enable code coverage support, for use during jemalloc test development.
Additional testing targets are available if this option is enabled:
coverage
coverage_unit
coverage_integration
coverage_stress
These targets do not clear code coverage results from previous runs, and
there are interactions between the various coverage targets, so it is
usually advisable to run 'make clean' between repeated code coverage runs.
--disable-stats
Disable statistics gathering functionality. See the "opt.stats_print"
option documentation for usage details.
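When statistics are left enabled (the default), they can also be read programmatically through mallctl(). A small sketch, assuming an unprefixed Linux build (with --with-jemalloc-prefix the calls would be je_mallctl() etc.):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    void *p = malloc(1 << 20);   /* make sure something has been allocated */
    uint64_t epoch = 1;
    size_t allocated, len = sizeof(allocated);

    /* Refresh the cached statistics, then read total bytes allocated. */
    mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch));
    if (mallctl("stats.allocated", &allocated, &len, NULL, 0) == 0)
        printf("stats.allocated: %zu bytes\n", allocated);

    free(p);
    return 0;
}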
--enable-ivsalloc
Enable validation code, which verifies that pointers reside within
jemalloc-owned chunks before dereferencing them. This incurs a minor
performance hit.
--enable-prof
Enable heap profiling and leak detection functionality. See the "opt.prof"
option documentation for usage details. When enabled, there are several
approaches to backtracing, and the configure script chooses the first one
in the following list that appears to function correctly:
+ libunwind (requires --enable-prof-libunwind)
+ libgcc (unless --disable-prof-libgcc)
+ gcc intrinsics (unless --disable-prof-gcc)
--enable-prof-libunwind
Use the libunwind library (http://www.nongnu.org/libunwind/) for stack
backtracing.
--disable-prof-libgcc
Disable the use of libgcc's backtracing functionality.
--disable-prof-gcc
Disable the use of gcc intrinsics for backtracing.
--with-static-libunwind=<libunwind.a>
Statically link against the specified libunwind.a rather than dynamically
linking with -lunwind.
--disable-tcache
Disable thread-specific caches for small objects. Objects are cached and
released in bulk, thus reducing the total number of mutex operations. See
the "opt.tcache" option for usage details.
--disable-munmap
Disable virtual memory deallocation via munmap(2); instead keep track of
the virtual memory for later use. munmap() is disabled by default (i.e.
--disable-munmap is implied) on Linux, which has a quirk in its virtual
memory allocation algorithm that causes semi-permanent VM map holes under
normal jemalloc operation.
--disable-fill
Disable support for junk/zero filling of memory, quarantine, and redzones.
See the "opt.junk", "opt.zero", "opt.quarantine", and "opt.redzone" option
documentation for usage details.
--disable-valgrind
Disable support for Valgrind.
--disable-zone-allocator
Disable zone allocator for Darwin. This means jemalloc won't be hooked as
the default allocator on OSX/iOS.
--enable-utrace
Enable utrace(2)-based allocation tracing. This feature is not broadly
portable (FreeBSD has it, but Linux and OS X do not).
--enable-xmalloc
Enable support for optional immediate termination due to out-of-memory
errors, as is commonly implemented by an "xmalloc" wrapper function around malloc.
See the "opt.xmalloc" option documentation for usage details.
--enable-lazy-lock
Enable code that wraps pthread_create() to detect when an application
switches from single-threaded to multi-threaded mode, so that it can avoid
mutex locking/unlocking operations while in single-threaded mode. In
practice, this feature usually has little impact on performance unless
thread-specific caching is disabled.
--disable-tls
Disable thread-local storage (TLS), which allows for fast access to
thread-local variables via the __thread keyword. If TLS is available,
jemalloc uses it for several purposes.
--disable-cache-oblivious
Disable cache-oblivious large allocation alignment for large allocation
requests with no alignment constraints. If this feature is disabled, all
large allocations are page-aligned as an implementation artifact, which can
severely harm CPU cache utilization. However, the cache-oblivious layout
comes at the cost of one extra page per large allocation, which in the
most extreme case increases physical memory usage for the 16 KiB size class
to 20 KiB.
--with-xslroot=<path>
Specify where to find DocBook XSL stylesheets when building the
documentation.
--with-lg-page=<lg-page>
Specify the base 2 log of the system page size. This option is only useful
when cross compiling, since the configure script automatically determines
the host's page size by default.
--with-lg-page-sizes=<lg-page-sizes>
Specify the comma-separated base 2 logs of the page sizes to support. This
option may be useful when cross-compiling in combination with
--with-lg-page, but its primary use case is for integration with FreeBSD's
libc, wherein jemalloc is embedded.
--with-lg-size-class-group=<lg-size-class-group>
Specify the base 2 log of how many size classes to use for each doubling in
size. By default jemalloc uses <lg-size-class-group>=2, which results in
e.g. the following size classes:
[...], 64,
80, 96, 112, 128,
160, [...]
<lg-size-class-group>=3 results in e.g. the following size classes:
[...], 64,
72, 80, 88, 96, 104, 112, 120, 128,
144, [...]
The minimal <lg-size-class-group>=0 causes jemalloc to only provide size
classes that are powers of 2:
[...],
64,
128,
256,
[...]
An implementation detail currently limits the total number of small size
classes to 255, and a compilation error will result if the
<lg-size-class-group> you specify cannot be supported. The limit is
roughly <lg-size-class-group>=4, depending on page size.
--with-lg-quantum=<lg-quantum>
Specify the base 2 log of the minimum allocation alignment. jemalloc needs
to know the minimum alignment that meets the following C standard
requirement (quoted from the April 12, 2011 draft of the C11 standard):
The pointer returned if the allocation succeeds is suitably aligned so
that it may be assigned to a pointer to any type of object with a
fundamental alignment requirement and then used to access such an object
or an array of such objects in the space allocated [...]
This setting is architecture-specific, and although jemalloc includes known
safe values for the most commonly used modern architectures, there is a
wrinkle related to GNU libc (glibc) that may impact your choice of
<lg-quantum>. On most modern architectures, this mandates 16-byte alignment
(<lg-quantum>=4), but the glibc developers chose not to meet this
requirement for performance reasons. An old discussion can be found at
https://sourceware.org/bugzilla/show_bug.cgi?id=206 . Unlike glibc,
jemalloc does follow the C standard by default (caveat: jemalloc
technically cheats if --with-lg-tiny-min is smaller than
--with-lg-quantum), but the fact that Linux systems already work around
this allocator noncompliance means that it is generally safe in practice to
let jemalloc's minimum alignment follow glibc's lead. If you specify
--with-lg-quantum=3 during configuration, jemalloc will provide additional
size classes that are not 16-byte-aligned (24, 40, and 56, assuming
--with-lg-size-class-group=2).
--with-lg-tiny-min=<lg-tiny-min>
Specify the base 2 log of the minimum tiny size class to support. Tiny
size classes are powers of 2 less than the quantum, and are only
incorporated if <lg-tiny-min> is less than <lg-quantum> (see
--with-lg-quantum). Tiny size classes technically violate the C standard
requirement for minimum alignment, and crashes could conceivably result if
the compiler were to generate instructions that made alignment assumptions,
both because illegal instruction traps could result, and because accesses
could straddle page boundaries and cause segmentation faults due to
accessing unmapped addresses.
The default of <lg-tiny-min>=3 works well in practice even on architectures
that technically require 16-byte alignment, probably for the same reason
--with-lg-quantum=3 works. Smaller tiny size classes can, and will, cause
crashes (see https://bugzilla.mozilla.org/show_bug.cgi?id=691003 for an
example).
This option is rarely useful, and is mainly provided as documentation of a
subtle implementation detail. If you do use this option, specify a
value in [3, ..., <lg-quantum>].
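To tie several of the options above together, here is a minimal sketch of a
development-oriented configuration; the flags are drawn from the descriptions
above, but this particular combination is only an illustration, not a
recommendation:
./configure --with-jemalloc-prefix=je_ --enable-debug --enable-prof
make
make install
With such a build, applications call je_malloc()/je_free() explicitly, while
plain malloc()/free() continue to resolve to the system allocator.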
The following environment variables (not a definitive list) impact configure's
behavior:
CFLAGS="?"
Pass these flags to the compiler. You probably shouldn't define this unless
you know what you are doing. (Use EXTRA_CFLAGS instead.)
EXTRA_CFLAGS="?"
Append these flags to CFLAGS. This makes it possible to add flags such as
-Werror, while allowing the configure script to determine what other flags
are appropriate for the specified configuration.
The configure script specifically checks whether an optimization flag (-O*)
is specified in EXTRA_CFLAGS, and refrains from specifying an optimization
level if it finds that one has already been specified.
CPPFLAGS="?"
Pass these flags to the C preprocessor. Note that CFLAGS is not passed to
'cpp' when 'configure' is looking for include files, so you must use
CPPFLAGS instead if you need to help 'configure' find header files.
LD_LIBRARY_PATH="?"
'ld' uses this colon-separated list to find libraries.
LDFLAGS="?"
Pass these flags when linking.
PATH="?"
'configure' uses this to find programs.
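As an example, extra flags can be routed through the supported variables rather
than through CFLAGS itself; the specific flags and paths below are placeholders:
EXTRA_CFLAGS="-Werror -g" CPPFLAGS="-I/opt/include" LDFLAGS="-L/opt/lib" ./configure
Since no -O flag appears in EXTRA_CFLAGS here, the configure script still
selects an optimization level on its own.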
=== Advanced compilation =======================================================
To build only parts of jemalloc, use the following targets:
build_lib_shared
build_lib_static
build_lib
build_doc_html
build_doc_man
build_doc
To install only parts of jemalloc, use the following targets:
install_bin
install_include
install_lib_shared
install_lib_static
install_lib
install_doc_html
install_doc_man
install_doc
To clean up build results to varying degrees, use the following make targets:
clean
distclean
relclean
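For example, to build only the shared library and the man page, and then remove
the build results afterwards (a sketch using only the targets listed above):
make build_lib_shared build_doc_man
make clean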
=== Advanced installation ======================================================
Optionally, define make variables when invoking make, including (not
exclusively):
INCLUDEDIR="?"
Use this as the installation prefix for header files.
LIBDIR="?"
Use this as the installation prefix for libraries.
MANDIR="?"
Use this as the installation prefix for man pages.
DESTDIR="?"
Prepend DESTDIR to INCLUDEDIR, LIBDIR, DATADIR, and MANDIR. This is useful
when installing to a different path than was specified via --prefix.
CC="?"
Use this to invoke the C compiler.
CFLAGS="?"
Pass these flags to the compiler.
CPPFLAGS="?"
Pass these flags to the C preprocessor.
LDFLAGS="?"
Pass these flags when linking.
PATH="?"
Use this to search for programs used during configuration and building.
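For example, a staged installation into a packaging root (the /tmp/stage path
is only an illustration):
make install DESTDIR=/tmp/stage
The --prefix recorded at configure time is preserved in the installed paths;
DESTDIR merely relocates where the files physically land.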
=== Development ================================================================
If you intend to make non-trivial changes to jemalloc, use the 'autogen.sh'
script rather than 'configure'. This re-generates 'configure', enables
configuration dependency rules, and enables re-generation of automatically
generated source files.
The build system supports using an object directory separate from the source
tree. For example, you can create an 'obj' directory, and from within that
directory, issue configuration and build commands:
autoconf
mkdir obj
cd obj
../configure --enable-autogen
make
=== Documentation ==============================================================
The manual page is generated in both html and roff formats. Any web browser
can be used to view the html manual. The roff manual page can be formatted
prior to installation via the following command:
nroff -man -t doc/jemalloc.3
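If the DocBook XSL stylesheets are not found automatically, point configure at
them and rebuild the documentation; the stylesheet path below is an assumption
and varies between systems:
./configure --with-xslroot=/usr/share/xml/docbook/stylesheet/docbook-xsl
make build_doc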
View File
@ -1,458 +0,0 @@
# Clear out all vpaths, then set just one (default vpath) for the main build
# directory.
vpath
vpath % .
# Clear the default suffixes, so that built-in rules are not used.
.SUFFIXES :
SHELL := /bin/sh
CC := @CC@
# Configuration parameters.
DESTDIR =
BINDIR := $(DESTDIR)@BINDIR@
INCLUDEDIR := $(DESTDIR)@INCLUDEDIR@
LIBDIR := $(DESTDIR)@LIBDIR@
DATADIR := $(DESTDIR)@DATADIR@
MANDIR := $(DESTDIR)@MANDIR@
srcroot := @srcroot@
objroot := @objroot@
abs_srcroot := @abs_srcroot@
abs_objroot := @abs_objroot@
# Build parameters.
CPPFLAGS := @CPPFLAGS@ -I$(srcroot)include -I$(objroot)include
CFLAGS := @CFLAGS@
LDFLAGS := @LDFLAGS@
EXTRA_LDFLAGS := @EXTRA_LDFLAGS@
LIBS := @LIBS@
TESTLIBS := @TESTLIBS@
RPATH_EXTRA := @RPATH_EXTRA@
SO := @so@
IMPORTLIB := @importlib@
O := @o@
A := @a@
EXE := @exe@
LIBPREFIX := @libprefix@
REV := @rev@
install_suffix := @install_suffix@
ABI := @abi@
XSLTPROC := @XSLTPROC@
AUTOCONF := @AUTOCONF@
_RPATH = @RPATH@
RPATH = $(if $(1),$(call _RPATH,$(1)))
cfghdrs_in := $(addprefix $(srcroot),@cfghdrs_in@)
cfghdrs_out := @cfghdrs_out@
cfgoutputs_in := $(addprefix $(srcroot),@cfgoutputs_in@)
cfgoutputs_out := @cfgoutputs_out@
enable_autogen := @enable_autogen@
enable_code_coverage := @enable_code_coverage@
enable_prof := @enable_prof@
enable_valgrind := @enable_valgrind@
enable_zone_allocator := @enable_zone_allocator@
MALLOC_CONF := @JEMALLOC_CPREFIX@MALLOC_CONF
DSO_LDFLAGS = @DSO_LDFLAGS@
SOREV = @SOREV@
PIC_CFLAGS = @PIC_CFLAGS@
CTARGET = @CTARGET@
LDTARGET = @LDTARGET@
MKLIB = @MKLIB@
AR = @AR@
ARFLAGS = @ARFLAGS@
CC_MM = @CC_MM@
ifeq (macho, $(ABI))
TEST_LIBRARY_PATH := DYLD_FALLBACK_LIBRARY_PATH="$(objroot)lib"
else
ifeq (pecoff, $(ABI))
TEST_LIBRARY_PATH := PATH="$(PATH):$(objroot)lib"
else
TEST_LIBRARY_PATH :=
endif
endif
LIBJEMALLOC := $(LIBPREFIX)jemalloc$(install_suffix)
# Lists of files.
BINS := $(objroot)bin/jemalloc-config $(objroot)bin/jemalloc.sh $(objroot)bin/jeprof
C_HDRS := $(objroot)include/jemalloc/jemalloc$(install_suffix).h
C_SRCS := $(srcroot)src/jemalloc.c $(srcroot)src/arena.c \
$(srcroot)src/atomic.c $(srcroot)src/base.c $(srcroot)src/bitmap.c \
$(srcroot)src/chunk.c $(srcroot)src/chunk_dss.c \
$(srcroot)src/chunk_mmap.c $(srcroot)src/ckh.c $(srcroot)src/ctl.c \
$(srcroot)src/extent.c $(srcroot)src/hash.c $(srcroot)src/huge.c \
$(srcroot)src/mb.c $(srcroot)src/mutex.c $(srcroot)src/pages.c \
$(srcroot)src/prof.c $(srcroot)src/quarantine.c $(srcroot)src/rtree.c \
$(srcroot)src/stats.c $(srcroot)src/tcache.c $(srcroot)src/util.c \
$(srcroot)src/tsd.c
ifeq ($(enable_valgrind), 1)
C_SRCS += $(srcroot)src/valgrind.c
endif
ifeq ($(enable_zone_allocator), 1)
C_SRCS += $(srcroot)src/zone.c
endif
ifeq ($(IMPORTLIB),$(SO))
STATIC_LIBS := $(objroot)lib/$(LIBJEMALLOC).$(A)
endif
ifdef PIC_CFLAGS
STATIC_LIBS += $(objroot)lib/$(LIBJEMALLOC)_pic.$(A)
else
STATIC_LIBS += $(objroot)lib/$(LIBJEMALLOC)_s.$(A)
endif
DSOS := $(objroot)lib/$(LIBJEMALLOC).$(SOREV)
ifneq ($(SOREV),$(SO))
DSOS += $(objroot)lib/$(LIBJEMALLOC).$(SO)
endif
PC := $(objroot)jemalloc.pc
MAN3 := $(objroot)doc/jemalloc$(install_suffix).3
DOCS_XML := $(objroot)doc/jemalloc$(install_suffix).xml
DOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.html)
DOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.3)
DOCS := $(DOCS_HTML) $(DOCS_MAN3)
C_TESTLIB_SRCS := $(srcroot)test/src/btalloc.c $(srcroot)test/src/btalloc_0.c \
$(srcroot)test/src/btalloc_1.c $(srcroot)test/src/math.c \
$(srcroot)test/src/mtx.c $(srcroot)test/src/mq.c \
$(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \
$(srcroot)test/src/thd.c $(srcroot)test/src/timer.c
C_UTIL_INTEGRATION_SRCS := $(srcroot)src/util.c
TESTS_UNIT := $(srcroot)test/unit/atomic.c \
$(srcroot)test/unit/bitmap.c \
$(srcroot)test/unit/ckh.c \
$(srcroot)test/unit/hash.c \
$(srcroot)test/unit/junk.c \
$(srcroot)test/unit/junk_alloc.c \
$(srcroot)test/unit/junk_free.c \
$(srcroot)test/unit/lg_chunk.c \
$(srcroot)test/unit/mallctl.c \
$(srcroot)test/unit/math.c \
$(srcroot)test/unit/mq.c \
$(srcroot)test/unit/mtx.c \
$(srcroot)test/unit/prof_accum.c \
$(srcroot)test/unit/prof_active.c \
$(srcroot)test/unit/prof_gdump.c \
$(srcroot)test/unit/prof_idump.c \
$(srcroot)test/unit/prof_reset.c \
$(srcroot)test/unit/prof_thread_name.c \
$(srcroot)test/unit/ql.c \
$(srcroot)test/unit/qr.c \
$(srcroot)test/unit/quarantine.c \
$(srcroot)test/unit/rb.c \
$(srcroot)test/unit/rtree.c \
$(srcroot)test/unit/SFMT.c \
$(srcroot)test/unit/size_classes.c \
$(srcroot)test/unit/stats.c \
$(srcroot)test/unit/tsd.c \
$(srcroot)test/unit/util.c \
$(srcroot)test/unit/zero.c
TESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \
$(srcroot)test/integration/allocated.c \
$(srcroot)test/integration/sdallocx.c \
$(srcroot)test/integration/mallocx.c \
$(srcroot)test/integration/MALLOCX_ARENA.c \
$(srcroot)test/integration/overflow.c \
$(srcroot)test/integration/posix_memalign.c \
$(srcroot)test/integration/rallocx.c \
$(srcroot)test/integration/thread_arena.c \
$(srcroot)test/integration/thread_tcache_enabled.c \
$(srcroot)test/integration/xallocx.c \
$(srcroot)test/integration/chunk.c
TESTS_STRESS := $(srcroot)test/stress/microbench.c
TESTS := $(TESTS_UNIT) $(TESTS_INTEGRATION) $(TESTS_STRESS)
C_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.$(O))
C_PIC_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.pic.$(O))
C_JET_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.jet.$(O))
C_TESTLIB_UNIT_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.unit.$(O))
C_TESTLIB_INTEGRATION_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.integration.$(O))
C_UTIL_INTEGRATION_OBJS := $(C_UTIL_INTEGRATION_SRCS:$(srcroot)%.c=$(objroot)%.integration.$(O))
C_TESTLIB_STRESS_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.stress.$(O))
C_TESTLIB_OBJS := $(C_TESTLIB_UNIT_OBJS) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(C_TESTLIB_STRESS_OBJS)
TESTS_UNIT_OBJS := $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%.$(O))
TESTS_INTEGRATION_OBJS := $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%.$(O))
TESTS_STRESS_OBJS := $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%.$(O))
TESTS_OBJS := $(TESTS_UNIT_OBJS) $(TESTS_INTEGRATION_OBJS) $(TESTS_STRESS_OBJS)
.PHONY: all dist build_doc_html build_doc_man build_doc
.PHONY: install_bin install_include install_lib
.PHONY: install_doc_html install_doc_man install_doc install
.PHONY: tests check clean distclean relclean
.SECONDARY : $(TESTS_OBJS)
# Default target.
all: build_lib
dist: build_doc
$(objroot)doc/%.html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl
$(XSLTPROC) -o $@ $(objroot)doc/html.xsl $<
$(objroot)doc/%.3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl
$(XSLTPROC) -o $@ $(objroot)doc/manpages.xsl $<
build_doc_html: $(DOCS_HTML)
build_doc_man: $(DOCS_MAN3)
build_doc: $(DOCS)
#
# Include generated dependency files.
#
ifdef CC_MM
-include $(C_OBJS:%.$(O)=%.d)
-include $(C_PIC_OBJS:%.$(O)=%.d)
-include $(C_JET_OBJS:%.$(O)=%.d)
-include $(C_TESTLIB_OBJS:%.$(O)=%.d)
-include $(TESTS_OBJS:%.$(O)=%.d)
endif
$(C_OBJS): $(objroot)src/%.$(O): $(srcroot)src/%.c
$(C_PIC_OBJS): $(objroot)src/%.pic.$(O): $(srcroot)src/%.c
$(C_PIC_OBJS): CFLAGS += $(PIC_CFLAGS)
$(C_JET_OBJS): $(objroot)src/%.jet.$(O): $(srcroot)src/%.c
$(C_JET_OBJS): CFLAGS += -DJEMALLOC_JET
$(C_TESTLIB_UNIT_OBJS): $(objroot)test/src/%.unit.$(O): $(srcroot)test/src/%.c
$(C_TESTLIB_UNIT_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST
$(C_TESTLIB_INTEGRATION_OBJS): $(objroot)test/src/%.integration.$(O): $(srcroot)test/src/%.c
$(C_TESTLIB_INTEGRATION_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_TEST
$(C_UTIL_INTEGRATION_OBJS): $(objroot)src/%.integration.$(O): $(srcroot)src/%.c
$(C_TESTLIB_STRESS_OBJS): $(objroot)test/src/%.stress.$(O): $(srcroot)test/src/%.c
$(C_TESTLIB_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST -DJEMALLOC_STRESS_TESTLIB
$(C_TESTLIB_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include
$(TESTS_UNIT_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST
$(TESTS_INTEGRATION_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_TEST
$(TESTS_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST
$(TESTS_OBJS): $(objroot)test/%.$(O): $(srcroot)test/%.c
$(TESTS_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include
ifneq ($(IMPORTLIB),$(SO))
$(C_OBJS) $(C_JET_OBJS): CPPFLAGS += -DDLLEXPORT
endif
ifndef CC_MM
# Dependencies.
HEADER_DIRS = $(srcroot)include/jemalloc/internal \
$(objroot)include/jemalloc $(objroot)include/jemalloc/internal
HEADERS = $(wildcard $(foreach dir,$(HEADER_DIRS),$(dir)/*.h))
$(C_OBJS) $(C_PIC_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS): $(HEADERS)
$(TESTS_OBJS): $(objroot)test/include/test/jemalloc_test.h
endif
$(C_OBJS) $(C_PIC_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS): %.$(O):
@mkdir -p $(@D)
$(CC) $(CFLAGS) -c $(CPPFLAGS) $(CTARGET) $<
ifdef CC_MM
@$(CC) -MM $(CPPFLAGS) -MT $@ -o $(@:%.$(O)=%.d) $<
endif
ifneq ($(SOREV),$(SO))
%.$(SO) : %.$(SOREV)
@mkdir -p $(@D)
ln -sf $(<F) $@
endif
$(objroot)lib/$(LIBJEMALLOC).$(SOREV) : $(if $(PIC_CFLAGS),$(C_PIC_OBJS),$(C_OBJS))
@mkdir -p $(@D)
$(CC) $(DSO_LDFLAGS) $(call RPATH,$(RPATH_EXTRA)) $(LDTARGET) $+ $(LDFLAGS) $(LIBS) $(EXTRA_LDFLAGS)
$(objroot)lib/$(LIBJEMALLOC)_pic.$(A) : $(C_PIC_OBJS)
$(objroot)lib/$(LIBJEMALLOC).$(A) : $(C_OBJS)
$(objroot)lib/$(LIBJEMALLOC)_s.$(A) : $(C_OBJS)
$(STATIC_LIBS):
@mkdir -p $(@D)
$(AR) $(ARFLAGS)@AROUT@ $+
$(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(TESTS_UNIT_LINK_OBJS) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
$(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
$(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
build_lib_shared: $(DSOS)
build_lib_static: $(STATIC_LIBS)
build_lib: build_lib_shared build_lib_static
install_bin:
install -d $(BINDIR)
@for b in $(BINS); do \
echo "install -m 755 $$b $(BINDIR)"; \
install -m 755 $$b $(BINDIR); \
done
install_include:
install -d $(INCLUDEDIR)/jemalloc
@for h in $(C_HDRS); do \
echo "install -m 644 $$h $(INCLUDEDIR)/jemalloc"; \
install -m 644 $$h $(INCLUDEDIR)/jemalloc; \
done
install_lib_shared: $(DSOS)
install -d $(LIBDIR)
install -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(LIBDIR)
ifneq ($(SOREV),$(SO))
ln -sf $(LIBJEMALLOC).$(SOREV) $(LIBDIR)/$(LIBJEMALLOC).$(SO)
endif
install_lib_static: $(STATIC_LIBS)
install -d $(LIBDIR)
@for l in $(STATIC_LIBS); do \
echo "install -m 755 $$l $(LIBDIR)"; \
install -m 755 $$l $(LIBDIR); \
done
install_lib_pc: $(PC)
install -d $(LIBDIR)/pkgconfig
@for l in $(PC); do \
echo "install -m 644 $$l $(LIBDIR)/pkgconfig"; \
install -m 644 $$l $(LIBDIR)/pkgconfig; \
done
install_lib: install_lib_shared install_lib_static install_lib_pc
install_doc_html:
install -d $(DATADIR)/doc/jemalloc$(install_suffix)
@for d in $(DOCS_HTML); do \
echo "install -m 644 $$d $(DATADIR)/doc/jemalloc$(install_suffix)"; \
install -m 644 $$d $(DATADIR)/doc/jemalloc$(install_suffix); \
done
install_doc_man:
install -d $(MANDIR)/man3
@for d in $(DOCS_MAN3); do \
echo "install -m 644 $$d $(MANDIR)/man3"; \
install -m 644 $$d $(MANDIR)/man3; \
done
install_doc: install_doc_html install_doc_man
install: install_bin install_include install_lib install_doc
tests_unit: $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%$(EXE))
tests_integration: $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%$(EXE))
tests_stress: $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%$(EXE))
tests: tests_unit tests_integration tests_stress
check_unit_dir:
@mkdir -p $(objroot)test/unit
check_integration_dir:
@mkdir -p $(objroot)test/integration
stress_dir:
@mkdir -p $(objroot)test/stress
check_dir: check_unit_dir check_integration_dir
check_unit: tests_unit check_unit_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
check_integration_prof: tests_integration check_integration_dir
ifeq ($(enable_prof), 1)
$(MALLOC_CONF)="prof:true" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="prof:true,prof_active:false" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
endif
check_integration: tests_integration check_integration_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
stress: tests_stress stress_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%)
check: tests check_dir check_integration_prof
$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
ifeq ($(enable_code_coverage), 1)
coverage_unit: check_unit
$(SHELL) $(srcroot)coverage.sh $(srcroot)src jet $(C_JET_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src unit $(C_TESTLIB_UNIT_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/unit unit $(TESTS_UNIT_OBJS)
coverage_integration: check_integration
$(SHELL) $(srcroot)coverage.sh $(srcroot)src pic $(C_PIC_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)src integration $(C_UTIL_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src integration $(C_TESTLIB_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/integration integration $(TESTS_INTEGRATION_OBJS)
coverage_stress: stress
$(SHELL) $(srcroot)coverage.sh $(srcroot)src pic $(C_PIC_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)src jet $(C_JET_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src stress $(C_TESTLIB_STRESS_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/stress stress $(TESTS_STRESS_OBJS)
coverage: check
$(SHELL) $(srcroot)coverage.sh $(srcroot)src pic $(C_PIC_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)src jet $(C_JET_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)src integration $(C_UTIL_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src unit $(C_TESTLIB_UNIT_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src integration $(C_TESTLIB_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src stress $(C_TESTLIB_STRESS_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/unit unit $(TESTS_UNIT_OBJS) $(TESTS_UNIT_AUX_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/integration integration $(TESTS_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/stress integration $(TESTS_STRESS_OBJS)
endif
clean:
rm -f $(C_OBJS)
rm -f $(C_PIC_OBJS)
rm -f $(C_JET_OBJS)
rm -f $(C_TESTLIB_OBJS)
rm -f $(C_OBJS:%.$(O)=%.d)
rm -f $(C_OBJS:%.$(O)=%.gcda)
rm -f $(C_OBJS:%.$(O)=%.gcno)
rm -f $(C_PIC_OBJS:%.$(O)=%.d)
rm -f $(C_PIC_OBJS:%.$(O)=%.gcda)
rm -f $(C_PIC_OBJS:%.$(O)=%.gcno)
rm -f $(C_JET_OBJS:%.$(O)=%.d)
rm -f $(C_JET_OBJS:%.$(O)=%.gcda)
rm -f $(C_JET_OBJS:%.$(O)=%.gcno)
rm -f $(C_TESTLIB_OBJS:%.$(O)=%.d)
rm -f $(C_TESTLIB_OBJS:%.$(O)=%.gcda)
rm -f $(C_TESTLIB_OBJS:%.$(O)=%.gcno)
rm -f $(TESTS_OBJS:%.$(O)=%$(EXE))
rm -f $(TESTS_OBJS)
rm -f $(TESTS_OBJS:%.$(O)=%.d)
rm -f $(TESTS_OBJS:%.$(O)=%.gcda)
rm -f $(TESTS_OBJS:%.$(O)=%.gcno)
rm -f $(TESTS_OBJS:%.$(O)=%.out)
rm -f $(DSOS) $(STATIC_LIBS)
rm -f $(objroot)*.gcov.*
distclean: clean
rm -f $(objroot)bin/jemalloc-config
rm -f $(objroot)bin/jemalloc.sh
rm -f $(objroot)bin/jeprof
rm -f $(objroot)config.log
rm -f $(objroot)config.status
rm -f $(objroot)config.stamp
rm -f $(cfghdrs_out)
rm -f $(cfgoutputs_out)
relclean: distclean
rm -f $(objroot)configure
rm -f $(objroot)VERSION
rm -f $(DOCS_HTML)
rm -f $(DOCS_MAN3)
#===============================================================================
# Re-configuration rules.
ifeq ($(enable_autogen), 1)
$(srcroot)configure : $(srcroot)configure.ac
cd ./$(srcroot) && $(AUTOCONF)
$(objroot)config.status : $(srcroot)configure
./$(objroot)config.status --recheck
$(srcroot)config.stamp.in : $(srcroot)configure.ac
echo stamp > $(srcroot)config.stamp.in
$(objroot)config.stamp : $(cfgoutputs_in) $(cfghdrs_in) $(srcroot)configure
./$(objroot)config.status
@touch $@
# There must be some action in order for make to re-read Makefile when it is
# out of date.
$(cfgoutputs_out) $(cfghdrs_out) : $(objroot)config.stamp
@true
endif
View File
@ -1,20 +0,0 @@
jemalloc is a general purpose malloc(3) implementation that emphasizes
fragmentation avoidance and scalable concurrency support. jemalloc first came
into use as the FreeBSD libc allocator in 2005, and since then it has found its
way into numerous applications that rely on its predictable behavior. In 2010
jemalloc development efforts broadened to include developer support features
such as heap profiling, Valgrind integration, and extensive monitoring/tuning
hooks. Modern jemalloc releases continue to be integrated back into FreeBSD,
and therefore versatility remains critical. Ongoing development efforts trend
toward making jemalloc among the best allocators for a broad range of demanding
applications, and eliminating/mitigating weaknesses that have practical
repercussions for real world applications.
The COPYING file contains copyright and licensing information.
The INSTALL file contains information on how to configure, build, and install
jemalloc.
The ChangeLog file contains a brief summary of changes for each release.
URL: http://www.canonware.com/jemalloc/
View File
@ -1 +0,0 @@
4.0.3-0-ge9192eacf8935e29fc62fddc2701f7942b1cc02c
View File
@ -1,17 +0,0 @@
#!/bin/sh
for i in autoconf; do
echo "$i"
$i
if [ $? -ne 0 ]; then
echo "Error $? in $i"
exit 1
fi
done
echo "./configure --enable-autogen $@"
./configure --enable-autogen $@
if [ $? -ne 0 ]; then
echo "Error $? in ./configure"
exit 1
fi
View File
@ -1,79 +0,0 @@
#!/bin/sh
usage() {
cat <<EOF
Usage:
@BINDIR@/jemalloc-config <option>
Options:
--help | -h : Print usage.
--version : Print jemalloc version.
--revision : Print shared library revision number.
--config : Print configure options used to build jemalloc.
--prefix : Print installation directory prefix.
--bindir : Print binary installation directory.
--datadir : Print data installation directory.
--includedir : Print include installation directory.
--libdir : Print library installation directory.
--mandir : Print manual page installation directory.
--cc : Print compiler used to build jemalloc.
--cflags : Print compiler flags used to build jemalloc.
--cppflags : Print preprocessor flags used to build jemalloc.
--ldflags : Print library flags used to build jemalloc.
--libs : Print libraries jemalloc was linked against.
EOF
}
prefix="@prefix@"
exec_prefix="@exec_prefix@"
case "$1" in
--help | -h)
usage
exit 0
;;
--version)
echo "@jemalloc_version@"
;;
--revision)
echo "@rev@"
;;
--config)
echo "@CONFIG@"
;;
--prefix)
echo "@PREFIX@"
;;
--bindir)
echo "@BINDIR@"
;;
--datadir)
echo "@DATADIR@"
;;
--includedir)
echo "@INCLUDEDIR@"
;;
--libdir)
echo "@LIBDIR@"
;;
--mandir)
echo "@MANDIR@"
;;
--cc)
echo "@CC@"
;;
--cflags)
echo "@CFLAGS@"
;;
--cppflags)
echo "@CPPFLAGS@"
;;
--ldflags)
echo "@LDFLAGS@ @EXTRA_LDFLAGS@"
;;
--libs)
echo "@LIBS@"
;;
*)
usage
exit 1
esac
View File
@ -1,9 +0,0 @@
#!/bin/sh
prefix=@prefix@
exec_prefix=@exec_prefix@
libdir=@libdir@
@LD_PRELOAD_VAR@=${libdir}/libjemalloc.@SOREV@
export @LD_PRELOAD_VAR@
exec "$@"
File diff suppressed because it is too large	Load Diff
File diff suppressed because it is too large	Load Diff
File diff suppressed because it is too large	Load Diff
File diff suppressed because it is too large	Load Diff
File diff suppressed because it is too large	Load Diff
View File
@ -1,16 +0,0 @@
#!/bin/sh
set -e
objdir=$1
suffix=$2
shift 2
objs=$@
gcov -b -p -f -o "${objdir}" ${objs}
# Move gcov outputs so that subsequent gcov invocations won't clobber results
# for the same sources with different compilation flags.
for f in `find . -maxdepth 1 -type f -name '*.gcov'` ; do
mv "${f}" "${f}.${suffix}"
done
View File
@ -1,4 +0,0 @@
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:import href="@XSLROOT@/html/docbook.xsl"/>
<xsl:import href="@abs_srcroot@doc/stylesheet.xsl"/>
</xsl:stylesheet>
File diff suppressed because it is too large	Load Diff
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large	Load Diff
View File
@ -1,4 +0,0 @@
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:import href="@XSLROOT@/manpages/docbook.xsl"/>
<xsl:import href="@abs_srcroot@doc/stylesheet.xsl"/>
</xsl:stylesheet>
View File
@ -1,7 +0,0 @@
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:param name="funcsynopsis.style">ansi</xsl:param>
<xsl:param name="function.parens" select="1"/>
<xsl:template match="mallctl">
"<xsl:call-template name="inline.monoseq"/>"
</xsl:template>
</xsl:stylesheet>
File diff suppressed because it is too large	Load Diff
View File
@ -1,651 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#define atomic_read_uint64(p) atomic_add_uint64(p, 0)
#define atomic_read_uint32(p) atomic_add_uint32(p, 0)
#define atomic_read_p(p) atomic_add_p(p, NULL)
#define atomic_read_z(p) atomic_add_z(p, 0)
#define atomic_read_u(p) atomic_add_u(p, 0)
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
/*
* All arithmetic functions return the arithmetic result of the atomic
* operation. Some atomic operation APIs return the value prior to mutation, in
* which case the following functions must redundantly compute the result so
* that it can be returned. These functions are normally inlined, so the extra
* operations can be optimized away if the return values aren't used by the
* callers.
*
* <t> atomic_read_<t>(<t> *p) { return (*p); }
* <t> atomic_add_<t>(<t> *p, <t> x) { return (*p + x); }
* <t> atomic_sub_<t>(<t> *p, <t> x) { return (*p - x); }
* bool atomic_cas_<t>(<t> *p, <t> c, <t> s)
* {
* if (*p != c)
* return (true);
* *p = s;
* return (false);
* }
* void atomic_write_<t>(<t> *p, <t> x) { *p = x; }
*/
#ifndef JEMALLOC_ENABLE_INLINE
uint64_t atomic_add_uint64(uint64_t *p, uint64_t x);
uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x);
bool atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s);
void atomic_write_uint64(uint64_t *p, uint64_t x);
uint32_t atomic_add_uint32(uint32_t *p, uint32_t x);
uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x);
bool atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s);
void atomic_write_uint32(uint32_t *p, uint32_t x);
void *atomic_add_p(void **p, void *x);
void *atomic_sub_p(void **p, void *x);
bool atomic_cas_p(void **p, void *c, void *s);
void atomic_write_p(void **p, const void *x);
size_t atomic_add_z(size_t *p, size_t x);
size_t atomic_sub_z(size_t *p, size_t x);
bool atomic_cas_z(size_t *p, size_t c, size_t s);
void atomic_write_z(size_t *p, size_t x);
unsigned atomic_add_u(unsigned *p, unsigned x);
unsigned atomic_sub_u(unsigned *p, unsigned x);
bool atomic_cas_u(unsigned *p, unsigned c, unsigned s);
void atomic_write_u(unsigned *p, unsigned x);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ATOMIC_C_))
/******************************************************************************/
/* 64-bit operations. */
#if (LG_SIZEOF_PTR == 3 || LG_SIZEOF_INT == 3)
# if (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
uint64_t t = x;
asm volatile (
"lock; xaddq %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
uint64_t t;
x = (uint64_t)(-(int64_t)x);
t = x;
asm volatile (
"lock; xaddq %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
uint8_t success;
asm volatile (
"lock; cmpxchgq %4, %0;"
"sete %1;"
: "=m" (*p), "=a" (success) /* Outputs. */
: "m" (*p), "a" (c), "r" (s) /* Inputs. */
: "memory" /* Clobbers. */
);
return (!(bool)success);
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
asm volatile (
"xchgq %1, %0;" /* Lock is implied by xchgq. */
: "=m" (*p), "+r" (x) /* Outputs. */
: "m" (*p) /* Inputs. */
: "memory" /* Clobbers. */
);
}
# elif (defined(JEMALLOC_C11ATOMICS))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (atomic_fetch_add(a, x) + x);
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (atomic_fetch_sub(a, x) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (!atomic_compare_exchange_strong(a, &c, s));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
atomic_store(a, x);
}
# elif (defined(JEMALLOC_ATOMIC9))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
/*
* atomic_fetchadd_64() doesn't exist, but we only ever use this
* function on LP64 systems, so atomic_fetchadd_long() will do.
*/
assert(sizeof(uint64_t) == sizeof(unsigned long));
return (atomic_fetchadd_long(p, (unsigned long)x) + x);
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
assert(sizeof(uint64_t) == sizeof(unsigned long));
return (atomic_fetchadd_long(p, (unsigned long)(-(long)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
assert(sizeof(uint64_t) == sizeof(unsigned long));
return (!atomic_cmpset_long(p, (unsigned long)c, (unsigned long)s));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
assert(sizeof(uint64_t) == sizeof(unsigned long));
atomic_store_rel_long(p, x);
}
# elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (OSAtomicAdd64((int64_t)x, (int64_t *)p));
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (OSAtomicAdd64(-((int64_t)x), (int64_t *)p));
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
return (!OSAtomicCompareAndSwap64(c, s, (int64_t *)p));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
uint64_t o;
/* The documented OSAtomic*() API does not expose an atomic exchange. */
do {
o = atomic_read_uint64(p);
} while (atomic_cas_uint64(p, o, x));
}
# elif (defined(_MSC_VER))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (InterlockedExchangeAdd64(p, x) + x);
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (InterlockedExchangeAdd64(p, -((int64_t)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
uint64_t o;
o = InterlockedCompareExchange64(p, s, c);
return (o != c);
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
InterlockedExchange64(p, x);
}
# elif (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8) || \
defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (__sync_add_and_fetch(p, x));
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (__sync_sub_and_fetch(p, x));
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
return (!__sync_bool_compare_and_swap(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
__sync_lock_test_and_set(p, x);
}
# else
# error "Missing implementation for 64-bit atomic operations"
# endif
#endif
/******************************************************************************/
/* 32-bit operations. */
#if (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
uint32_t t = x;
asm volatile (
"lock; xaddl %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
uint32_t t;
x = (uint32_t)(-(int32_t)x);
t = x;
asm volatile (
"lock; xaddl %0, %1;"
: "+r" (t), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return (t + x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
uint8_t success;
asm volatile (
"lock; cmpxchgl %4, %0;"
"sete %1;"
: "=m" (*p), "=a" (success) /* Outputs. */
: "m" (*p), "a" (c), "r" (s) /* Inputs. */
: "memory"
);
return (!(bool)success);
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
asm volatile (
"xchgl %1, %0;" /* Lock is implied by xchgl. */
: "=m" (*p), "+r" (x) /* Outputs. */
: "m" (*p) /* Inputs. */
: "memory" /* Clobbers. */
);
}
# elif (defined(JEMALLOC_C11ATOMICS))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (atomic_fetch_add(a, x) + x);
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (atomic_fetch_sub(a, x) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (!atomic_compare_exchange_strong(a, &c, s));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
atomic_store(a, x);
}
#elif (defined(JEMALLOC_ATOMIC9))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (atomic_fetchadd_32(p, x) + x);
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (atomic_fetchadd_32(p, (uint32_t)(-(int32_t)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
return (!atomic_cmpset_32(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
atomic_store_rel_32(p, x);
}
#elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (OSAtomicAdd32((int32_t)x, (int32_t *)p));
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (OSAtomicAdd32(-((int32_t)x), (int32_t *)p));
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
return (!OSAtomicCompareAndSwap32(c, s, (int32_t *)p));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
uint32_t o;
/* The documented OSAtomic*() API does not expose an atomic exchange. */
do {
o = atomic_read_uint32(p);
} while (atomic_cas_uint32(p, o, x));
}
#elif (defined(_MSC_VER))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (InterlockedExchangeAdd(p, x) + x);
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (InterlockedExchangeAdd(p, -((int32_t)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
uint32_t o;
o = InterlockedCompareExchange(p, s, c);
return (o != c);
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
InterlockedExchange(p, x);
}
#elif (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4) || \
defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (__sync_add_and_fetch(p, x));
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (__sync_sub_and_fetch(p, x));
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
return (!__sync_bool_compare_and_swap(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
__sync_lock_test_and_set(p, x);
}
#else
# error "Missing implementation for 32-bit atomic operations"
#endif
/******************************************************************************/
/* Pointer operations. */
JEMALLOC_INLINE void *
atomic_add_p(void **p, void *x)
{
#if (LG_SIZEOF_PTR == 3)
return ((void *)atomic_add_uint64((uint64_t *)p, (uint64_t)x));
#elif (LG_SIZEOF_PTR == 2)
return ((void *)atomic_add_uint32((uint32_t *)p, (uint32_t)x));
#endif
}
JEMALLOC_INLINE void *
atomic_sub_p(void **p, void *x)
{
#if (LG_SIZEOF_PTR == 3)
return ((void *)atomic_add_uint64((uint64_t *)p,
(uint64_t)-((int64_t)x)));
#elif (LG_SIZEOF_PTR == 2)
return ((void *)atomic_add_uint32((uint32_t *)p,
(uint32_t)-((int32_t)x)));
#endif
}
JEMALLOC_INLINE bool
atomic_cas_p(void **p, void *c, void *s)
{
#if (LG_SIZEOF_PTR == 3)
return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
#elif (LG_SIZEOF_PTR == 2)
return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
#endif
}
JEMALLOC_INLINE void
atomic_write_p(void **p, const void *x)
{
#if (LG_SIZEOF_PTR == 3)
atomic_write_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_PTR == 2)
atomic_write_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
/******************************************************************************/
/* size_t operations. */
JEMALLOC_INLINE size_t
atomic_add_z(size_t *p, size_t x)
{
#if (LG_SIZEOF_PTR == 3)
return ((size_t)atomic_add_uint64((uint64_t *)p, (uint64_t)x));
#elif (LG_SIZEOF_PTR == 2)
return ((size_t)atomic_add_uint32((uint32_t *)p, (uint32_t)x));
#endif
}
JEMALLOC_INLINE size_t
atomic_sub_z(size_t *p, size_t x)
{
#if (LG_SIZEOF_PTR == 3)
return ((size_t)atomic_add_uint64((uint64_t *)p,
(uint64_t)-((int64_t)x)));
#elif (LG_SIZEOF_PTR == 2)
return ((size_t)atomic_add_uint32((uint32_t *)p,
(uint32_t)-((int32_t)x)));
#endif
}
JEMALLOC_INLINE bool
atomic_cas_z(size_t *p, size_t c, size_t s)
{
#if (LG_SIZEOF_PTR == 3)
return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
#elif (LG_SIZEOF_PTR == 2)
return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
#endif
}
JEMALLOC_INLINE void
atomic_write_z(size_t *p, size_t x)
{
#if (LG_SIZEOF_PTR == 3)
atomic_write_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_PTR == 2)
atomic_write_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
/******************************************************************************/
/* unsigned operations. */
JEMALLOC_INLINE unsigned
atomic_add_u(unsigned *p, unsigned x)
{
#if (LG_SIZEOF_INT == 3)
return ((unsigned)atomic_add_uint64((uint64_t *)p, (uint64_t)x));
#elif (LG_SIZEOF_INT == 2)
return ((unsigned)atomic_add_uint32((uint32_t *)p, (uint32_t)x));
#endif
}
JEMALLOC_INLINE unsigned
atomic_sub_u(unsigned *p, unsigned x)
{
#if (LG_SIZEOF_INT == 3)
return ((unsigned)atomic_add_uint64((uint64_t *)p,
(uint64_t)-((int64_t)x)));
#elif (LG_SIZEOF_INT == 2)
return ((unsigned)atomic_add_uint32((uint32_t *)p,
(uint32_t)-((int32_t)x)));
#endif
}
JEMALLOC_INLINE bool
atomic_cas_u(unsigned *p, unsigned c, unsigned s)
{
#if (LG_SIZEOF_INT == 3)
return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
#elif (LG_SIZEOF_INT == 2)
return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
#endif
}
JEMALLOC_INLINE void
atomic_write_u(unsigned *p, unsigned x)
{
#if (LG_SIZEOF_INT == 3)
atomic_write_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_INT == 2)
atomic_write_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
/******************************************************************************/
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
View File
@ -1,24 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void *base_alloc(size_t size);
void base_stats_get(size_t *allocated, size_t *resident, size_t *mapped);
bool base_boot(void);
void base_prefork(void);
void base_postfork_parent(void);
void base_postfork_child(void);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
View File
@ -1,230 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
/* Maximum bitmap bit count is 2^LG_BITMAP_MAXBITS. */
#define LG_BITMAP_MAXBITS LG_RUN_MAXREGS
#define BITMAP_MAXBITS (ZU(1) << LG_BITMAP_MAXBITS)
typedef struct bitmap_level_s bitmap_level_t;
typedef struct bitmap_info_s bitmap_info_t;
typedef unsigned long bitmap_t;
#define LG_SIZEOF_BITMAP LG_SIZEOF_LONG
/* Number of bits per group. */
#define LG_BITMAP_GROUP_NBITS (LG_SIZEOF_BITMAP + 3)
#define BITMAP_GROUP_NBITS (ZU(1) << LG_BITMAP_GROUP_NBITS)
#define BITMAP_GROUP_NBITS_MASK (BITMAP_GROUP_NBITS-1)
/* Number of groups required to store a given number of bits. */
#define BITMAP_BITS2GROUPS(nbits) \
((nbits + BITMAP_GROUP_NBITS_MASK) >> LG_BITMAP_GROUP_NBITS)
/*
* Number of groups required at a particular level for a given number of bits.
*/
#define BITMAP_GROUPS_L0(nbits) \
BITMAP_BITS2GROUPS(nbits)
#define BITMAP_GROUPS_L1(nbits) \
BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(nbits))
#define BITMAP_GROUPS_L2(nbits) \
BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS((nbits))))
#define BITMAP_GROUPS_L3(nbits) \
BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS( \
BITMAP_BITS2GROUPS((nbits)))))
/*
* Assuming the number of levels, number of groups required for a given number
* of bits.
*/
#define BITMAP_GROUPS_1_LEVEL(nbits) \
BITMAP_GROUPS_L0(nbits)
#define BITMAP_GROUPS_2_LEVEL(nbits) \
(BITMAP_GROUPS_1_LEVEL(nbits) + BITMAP_GROUPS_L1(nbits))
#define BITMAP_GROUPS_3_LEVEL(nbits) \
(BITMAP_GROUPS_2_LEVEL(nbits) + BITMAP_GROUPS_L2(nbits))
#define BITMAP_GROUPS_4_LEVEL(nbits) \
(BITMAP_GROUPS_3_LEVEL(nbits) + BITMAP_GROUPS_L3(nbits))
/*
* Maximum number of groups required to support LG_BITMAP_MAXBITS.
*/
#if LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS
# define BITMAP_GROUPS_MAX BITMAP_GROUPS_1_LEVEL(BITMAP_MAXBITS)
#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 2
# define BITMAP_GROUPS_MAX BITMAP_GROUPS_2_LEVEL(BITMAP_MAXBITS)
#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 3
# define BITMAP_GROUPS_MAX BITMAP_GROUPS_3_LEVEL(BITMAP_MAXBITS)
#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 4
# define BITMAP_GROUPS_MAX BITMAP_GROUPS_4_LEVEL(BITMAP_MAXBITS)
#else
# error "Unsupported bitmap size"
#endif
/* Maximum number of levels possible. */
#define BITMAP_MAX_LEVELS \
(LG_BITMAP_MAXBITS / LG_SIZEOF_BITMAP) \
+ !!(LG_BITMAP_MAXBITS % LG_SIZEOF_BITMAP)
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct bitmap_level_s {
/* Offset of this level's groups within the array of groups. */
size_t group_offset;
};
struct bitmap_info_s {
/* Logical number of bits in bitmap (stored at bottom level). */
size_t nbits;
/* Number of levels necessary for nbits. */
unsigned nlevels;
/*
* Only the first (nlevels+1) elements are used, and levels are ordered
* bottom to top (e.g. the bottom level is stored in levels[0]).
*/
bitmap_level_t levels[BITMAP_MAX_LEVELS+1];
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void bitmap_info_init(bitmap_info_t *binfo, size_t nbits);
size_t bitmap_info_ngroups(const bitmap_info_t *binfo);
size_t bitmap_size(size_t nbits);
void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
bool bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo);
bool bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit);
void bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit);
size_t bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo);
void bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_BITMAP_C_))
JEMALLOC_INLINE bool
bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo)
{
unsigned rgoff = binfo->levels[binfo->nlevels].group_offset - 1;
bitmap_t rg = bitmap[rgoff];
/* The bitmap is full iff the root group is 0. */
return (rg == 0);
}
JEMALLOC_INLINE bool
bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
{
size_t goff;
bitmap_t g;
assert(bit < binfo->nbits);
goff = bit >> LG_BITMAP_GROUP_NBITS;
g = bitmap[goff];
return (!(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))));
}
JEMALLOC_INLINE void
bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
{
size_t goff;
bitmap_t *gp;
bitmap_t g;
assert(bit < binfo->nbits);
assert(!bitmap_get(bitmap, binfo, bit));
goff = bit >> LG_BITMAP_GROUP_NBITS;
gp = &bitmap[goff];
g = *gp;
assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)));
g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g;
assert(bitmap_get(bitmap, binfo, bit));
/* Propagate group state transitions up the tree. */
if (g == 0) {
unsigned i;
for (i = 1; i < binfo->nlevels; i++) {
bit = goff;
goff = bit >> LG_BITMAP_GROUP_NBITS;
gp = &bitmap[binfo->levels[i].group_offset + goff];
g = *gp;
assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)));
g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g;
if (g != 0)
break;
}
}
}
/* sfu: set first unset. */
JEMALLOC_INLINE size_t
bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo)
{
size_t bit;
bitmap_t g;
unsigned i;
assert(!bitmap_full(bitmap, binfo));
i = binfo->nlevels - 1;
g = bitmap[binfo->levels[i].group_offset];
bit = jemalloc_ffsl(g) - 1;
while (i > 0) {
i--;
g = bitmap[binfo->levels[i].group_offset + bit];
bit = (bit << LG_BITMAP_GROUP_NBITS) + (jemalloc_ffsl(g) - 1);
}
bitmap_set(bitmap, binfo, bit);
return (bit);
}
JEMALLOC_INLINE void
bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
{
size_t goff;
bitmap_t *gp;
bitmap_t g;
bool propagate;
assert(bit < binfo->nbits);
assert(bitmap_get(bitmap, binfo, bit));
goff = bit >> LG_BITMAP_GROUP_NBITS;
gp = &bitmap[goff];
g = *gp;
propagate = (g == 0);
assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))) == 0);
g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g;
assert(!bitmap_get(bitmap, binfo, bit));
/* Propagate group state transitions up the tree. */
if (propagate) {
unsigned i;
for (i = 1; i < binfo->nlevels; i++) {
bit = goff;
goff = bit >> LG_BITMAP_GROUP_NBITS;
gp = &bitmap[binfo->levels[i].group_offset + goff];
g = *gp;
propagate = (g == 0);
assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)))
== 0);
g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g;
if (!propagate)
break;
}
}
}
#endif
#endif /* JEMALLOC_H_INLINES */
View File

@ -1,99 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
/*
* Size and alignment of memory chunks that are allocated by the OS's virtual
* memory system.
*/
#define LG_CHUNK_DEFAULT 21
/* Return the chunk address for allocation address a. */
#define CHUNK_ADDR2BASE(a) \
((void *)((uintptr_t)(a) & ~chunksize_mask))
/* Return the chunk offset of address a. */
#define CHUNK_ADDR2OFFSET(a) \
((size_t)((uintptr_t)(a) & chunksize_mask))
/* Return the smallest chunk multiple that is >= s. */
#define CHUNK_CEILING(s) \
(((s) + chunksize_mask) & ~chunksize_mask)
#define CHUNK_HOOKS_INITIALIZER { \
NULL, \
NULL, \
NULL, \
NULL, \
NULL, \
NULL, \
NULL \
}
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
extern size_t opt_lg_chunk;
extern const char *opt_dss;
extern rtree_t chunks_rtree;
extern size_t chunksize;
extern size_t chunksize_mask; /* (chunksize - 1). */
extern size_t chunk_npages;
extern const chunk_hooks_t chunk_hooks_default;
chunk_hooks_t chunk_hooks_get(arena_t *arena);
chunk_hooks_t chunk_hooks_set(arena_t *arena,
const chunk_hooks_t *chunk_hooks);
bool chunk_register(const void *chunk, const extent_node_t *node);
void chunk_deregister(const void *chunk, const extent_node_t *node);
void *chunk_alloc_base(size_t size);
void *chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *new_addr, size_t size, size_t alignment, bool *zero,
bool dalloc_node);
void *chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit);
void chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, bool committed);
void chunk_dalloc_arena(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, bool zeroed, bool committed);
void chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, bool committed);
bool chunk_purge_arena(arena_t *arena, void *chunk, size_t offset,
size_t length);
bool chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, size_t offset, size_t length);
bool chunk_boot(void);
void chunk_prefork(void);
void chunk_postfork_parent(void);
void chunk_postfork_child(void);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
extent_node_t *chunk_lookup(const void *chunk, bool dependent);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_CHUNK_C_))
JEMALLOC_INLINE extent_node_t *
chunk_lookup(const void *ptr, bool dependent)
{
return (rtree_get(&chunks_rtree, (uintptr_t)ptr, dependent));
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
#include "jemalloc/internal/chunk_dss.h"
#include "jemalloc/internal/chunk_mmap.h"
View File
@ -1,39 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef enum {
dss_prec_disabled = 0,
dss_prec_primary = 1,
dss_prec_secondary = 2,
dss_prec_limit = 3
} dss_prec_t;
#define DSS_PREC_DEFAULT dss_prec_secondary
#define DSS_DEFAULT "secondary"
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
extern const char *dss_prec_names[];
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
dss_prec_t chunk_dss_prec_get(void);
bool chunk_dss_prec_set(dss_prec_t dss_prec);
void *chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size,
size_t alignment, bool *zero, bool *commit);
bool chunk_in_dss(void *chunk);
bool chunk_dss_boot(void);
void chunk_dss_prefork(void);
void chunk_dss_postfork_parent(void);
void chunk_dss_postfork_child(void);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

View File

@ -1,21 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void *chunk_alloc_mmap(size_t size, size_t alignment, bool *zero,
bool *commit);
bool chunk_dalloc_mmap(void *chunk, size_t size);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

View File

@ -1,88 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct ckh_s ckh_t;
typedef struct ckhc_s ckhc_t;
/* Typedefs to allow easy function pointer passing. */
typedef void ckh_hash_t (const void *, size_t[2]);
typedef bool ckh_keycomp_t (const void *, const void *);
/* Maintain counters used to get an idea of performance. */
/* #define CKH_COUNT */
/* Print counter values in ckh_delete() (requires CKH_COUNT). */
/* #define CKH_VERBOSE */
/*
* There are 2^LG_CKH_BUCKET_CELLS cells in each hash table bucket. Try to fit
* one bucket per L1 cache line.
*/
#define LG_CKH_BUCKET_CELLS (LG_CACHELINE - LG_SIZEOF_PTR - 1)
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
/* Hash table cell. */
struct ckhc_s {
const void *key;
const void *data;
};
struct ckh_s {
#ifdef CKH_COUNT
/* Counters used to get an idea of performance. */
uint64_t ngrows;
uint64_t nshrinks;
uint64_t nshrinkfails;
uint64_t ninserts;
uint64_t nrelocs;
#endif
/* Used for pseudo-random number generation. */
#define CKH_A 1103515241
#define CKH_C 12347
uint32_t prng_state;
/* Total number of items. */
size_t count;
/*
* Minimum and current number of hash table buckets. There are
* 2^LG_CKH_BUCKET_CELLS cells per bucket.
*/
unsigned lg_minbuckets;
unsigned lg_curbuckets;
/* Hash and comparison functions. */
ckh_hash_t *hash;
ckh_keycomp_t *keycomp;
/* Hash table with 2^lg_curbuckets buckets. */
ckhc_t *tab;
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
bool ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash,
ckh_keycomp_t *keycomp);
void ckh_delete(tsd_t *tsd, ckh_t *ckh);
size_t ckh_count(ckh_t *ckh);
bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data);
bool ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data);
bool ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key,
void **data);
bool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data);
void ckh_string_hash(const void *key, size_t r_hash[2]);
bool ckh_string_keycomp(const void *k1, const void *k2);
void ckh_pointer_hash(const void *key, size_t r_hash[2]);
bool ckh_pointer_keycomp(const void *k1, const void *k2);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
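The LG_CKH_BUCKET_CELLS comment above is easiest to see with concrete numbers. A small worked example, assuming a 64-byte L1 cache line (LG_CACHELINE == 6) and 8-byte pointers (LG_SIZEOF_PTR == 3); the values are illustrative only.

    /* One bucket: 2^(6-3-1) = 4 cells, each a key/data pointer pair (16 bytes),
     * so a bucket is exactly 64 bytes -- one cache line. */
    #include <stdio.h>

    int main(void) {
        int lg_cacheline  = 6;                                   /* assumed 64-byte line */
        int lg_sizeof_ptr = 3;                                   /* assumed 8-byte pointers */
        int lg_cells   = lg_cacheline - lg_sizeof_ptr - 1;       /* LG_CKH_BUCKET_CELLS == 2 */
        int cells      = 1 << lg_cells;                          /* 4 cells per bucket */
        int cell_bytes = 2 * (1 << lg_sizeof_ptr);               /* key + data pointers */
        printf("%d cells x %d bytes = %d bytes per bucket\n",
            cells, cell_bytes, cells * cell_bytes);              /* 4 x 16 = 64 */
        return 0;
    }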

View File

@ -1,111 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct ctl_node_s ctl_node_t;
typedef struct ctl_named_node_s ctl_named_node_t;
typedef struct ctl_indexed_node_s ctl_indexed_node_t;
typedef struct ctl_arena_stats_s ctl_arena_stats_t;
typedef struct ctl_stats_s ctl_stats_t;
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct ctl_node_s {
bool named;
};
struct ctl_named_node_s {
struct ctl_node_s node;
const char *name;
/* If (nchildren == 0), this is a terminal node. */
unsigned nchildren;
const ctl_node_t *children;
int (*ctl)(const size_t *, size_t, void *, size_t *,
void *, size_t);
};
struct ctl_indexed_node_s {
struct ctl_node_s node;
const ctl_named_node_t *(*index)(const size_t *, size_t, size_t);
};
struct ctl_arena_stats_s {
bool initialized;
unsigned nthreads;
const char *dss;
ssize_t lg_dirty_mult;
size_t pactive;
size_t pdirty;
arena_stats_t astats;
/* Aggregate stats for small size classes, based on bin stats. */
size_t allocated_small;
uint64_t nmalloc_small;
uint64_t ndalloc_small;
uint64_t nrequests_small;
malloc_bin_stats_t bstats[NBINS];
malloc_large_stats_t *lstats; /* nlclasses elements. */
malloc_huge_stats_t *hstats; /* nhclasses elements. */
};
struct ctl_stats_s {
size_t allocated;
size_t active;
size_t metadata;
size_t resident;
size_t mapped;
unsigned narenas;
ctl_arena_stats_t *arenas; /* (narenas + 1) elements. */
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
int ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp,
size_t newlen);
int ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp);
int ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp,
void *newp, size_t newlen);
bool ctl_boot(void);
void ctl_prefork(void);
void ctl_postfork_parent(void);
void ctl_postfork_child(void);
#define xmallctl(name, oldp, oldlenp, newp, newlen) do { \
if (je_mallctl(name, oldp, oldlenp, newp, newlen) \
!= 0) { \
malloc_printf( \
"<jemalloc>: Failure in xmallctl(\"%s\", ...)\n", \
name); \
abort(); \
} \
} while (0)
#define xmallctlnametomib(name, mibp, miblenp) do { \
if (je_mallctlnametomib(name, mibp, miblenp) != 0) { \
malloc_printf("<jemalloc>: Failure in " \
"xmallctlnametomib(\"%s\", ...)\n", name); \
abort(); \
} \
} while (0)
#define xmallctlbymib(mib, miblen, oldp, oldlenp, newp, newlen) do { \
if (je_mallctlbymib(mib, miblen, oldp, oldlenp, newp, \
newlen) != 0) { \
malloc_write( \
"<jemalloc>: Failure in xmallctlbymib()\n"); \
abort(); \
} \
} while (0)
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
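The xmallctl*() wrappers above are abort-on-failure shims around the public mallctl family. A minimal sketch of the underlying call, assuming a build that exports the default unprefixed public symbols (i.e. JEMALLOC_PREFIX is not set):

    /* Read-only mallctl query: oldp/oldlenp receive the value, newp is NULL. */
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        const char *version;
        size_t len = sizeof(version);
        if (mallctl("version", &version, &len, NULL, 0) != 0) {
            fprintf(stderr, "mallctl(\"version\") failed\n");
            return 1;
        }
        printf("jemalloc version: %s\n", version);
        return 0;
    }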

View File

@ -1,239 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct extent_node_s extent_node_t;
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
/* Tree of extents. Use accessor functions for en_* fields. */
struct extent_node_s {
/* Arena from which this extent came, if any. */
arena_t *en_arena;
/* Pointer to the extent that this tree node is responsible for. */
void *en_addr;
/* Total region size. */
size_t en_size;
/*
* The zeroed flag is used by chunk recycling code to track whether
* memory is zero-filled.
*/
bool en_zeroed;
/*
* True if physical memory is committed to the extent, whether
* explicitly or implicitly as on a system that overcommits and
* satisfies physical memory needs on demand via soft page faults.
*/
bool en_committed;
/*
* The achunk flag is used to validate that huge allocation lookups
* don't return arena chunks.
*/
bool en_achunk;
/* Profile counters, used for huge objects. */
prof_tctx_t *en_prof_tctx;
/* Linkage for arena's runs_dirty and chunks_cache rings. */
arena_runs_dirty_link_t rd;
qr(extent_node_t) cc_link;
union {
/* Linkage for the size/address-ordered tree. */
rb_node(extent_node_t) szad_link;
/* Linkage for arena's huge and node_cache lists. */
ql_elm(extent_node_t) ql_link;
};
/* Linkage for the address-ordered tree. */
rb_node(extent_node_t) ad_link;
};
typedef rb_tree(extent_node_t) extent_tree_t;
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
rb_proto(, extent_tree_szad_, extent_tree_t, extent_node_t)
rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t)
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
arena_t *extent_node_arena_get(const extent_node_t *node);
void *extent_node_addr_get(const extent_node_t *node);
size_t extent_node_size_get(const extent_node_t *node);
bool extent_node_zeroed_get(const extent_node_t *node);
bool extent_node_committed_get(const extent_node_t *node);
bool extent_node_achunk_get(const extent_node_t *node);
prof_tctx_t *extent_node_prof_tctx_get(const extent_node_t *node);
void extent_node_arena_set(extent_node_t *node, arena_t *arena);
void extent_node_addr_set(extent_node_t *node, void *addr);
void extent_node_size_set(extent_node_t *node, size_t size);
void extent_node_zeroed_set(extent_node_t *node, bool zeroed);
void extent_node_committed_set(extent_node_t *node, bool committed);
void extent_node_achunk_set(extent_node_t *node, bool achunk);
void extent_node_prof_tctx_set(extent_node_t *node, prof_tctx_t *tctx);
void extent_node_init(extent_node_t *node, arena_t *arena, void *addr,
size_t size, bool zeroed, bool committed);
void extent_node_dirty_linkage_init(extent_node_t *node);
void extent_node_dirty_insert(extent_node_t *node,
arena_runs_dirty_link_t *runs_dirty, extent_node_t *chunks_dirty);
void extent_node_dirty_remove(extent_node_t *node);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_EXTENT_C_))
JEMALLOC_INLINE arena_t *
extent_node_arena_get(const extent_node_t *node)
{
return (node->en_arena);
}
JEMALLOC_INLINE void *
extent_node_addr_get(const extent_node_t *node)
{
return (node->en_addr);
}
JEMALLOC_INLINE size_t
extent_node_size_get(const extent_node_t *node)
{
return (node->en_size);
}
JEMALLOC_INLINE bool
extent_node_zeroed_get(const extent_node_t *node)
{
return (node->en_zeroed);
}
JEMALLOC_INLINE bool
extent_node_committed_get(const extent_node_t *node)
{
assert(!node->en_achunk);
return (node->en_committed);
}
JEMALLOC_INLINE bool
extent_node_achunk_get(const extent_node_t *node)
{
return (node->en_achunk);
}
JEMALLOC_INLINE prof_tctx_t *
extent_node_prof_tctx_get(const extent_node_t *node)
{
return (node->en_prof_tctx);
}
JEMALLOC_INLINE void
extent_node_arena_set(extent_node_t *node, arena_t *arena)
{
node->en_arena = arena;
}
JEMALLOC_INLINE void
extent_node_addr_set(extent_node_t *node, void *addr)
{
node->en_addr = addr;
}
JEMALLOC_INLINE void
extent_node_size_set(extent_node_t *node, size_t size)
{
node->en_size = size;
}
JEMALLOC_INLINE void
extent_node_zeroed_set(extent_node_t *node, bool zeroed)
{
node->en_zeroed = zeroed;
}
JEMALLOC_INLINE void
extent_node_committed_set(extent_node_t *node, bool committed)
{
node->en_committed = committed;
}
JEMALLOC_INLINE void
extent_node_achunk_set(extent_node_t *node, bool achunk)
{
node->en_achunk = achunk;
}
JEMALLOC_INLINE void
extent_node_prof_tctx_set(extent_node_t *node, prof_tctx_t *tctx)
{
node->en_prof_tctx = tctx;
}
JEMALLOC_INLINE void
extent_node_init(extent_node_t *node, arena_t *arena, void *addr, size_t size,
bool zeroed, bool committed)
{
extent_node_arena_set(node, arena);
extent_node_addr_set(node, addr);
extent_node_size_set(node, size);
extent_node_zeroed_set(node, zeroed);
extent_node_committed_set(node, committed);
extent_node_achunk_set(node, false);
if (config_prof)
extent_node_prof_tctx_set(node, NULL);
}
JEMALLOC_INLINE void
extent_node_dirty_linkage_init(extent_node_t *node)
{
qr_new(&node->rd, rd_link);
qr_new(node, cc_link);
}
JEMALLOC_INLINE void
extent_node_dirty_insert(extent_node_t *node,
arena_runs_dirty_link_t *runs_dirty, extent_node_t *chunks_dirty)
{
qr_meld(runs_dirty, &node->rd, rd_link);
qr_meld(chunks_dirty, node, cc_link);
}
JEMALLOC_INLINE void
extent_node_dirty_remove(extent_node_t *node)
{
qr_remove(&node->rd, rd_link);
qr_remove(node, cc_link);
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

View File

@ -1,336 +0,0 @@
/*
* The following hash function is based on MurmurHash3, placed into the public
* domain by Austin Appleby. See http://code.google.com/p/smhasher/ for
* details.
*/
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
uint32_t hash_x86_32(const void *key, int len, uint32_t seed);
void hash_x86_128(const void *key, const int len, uint32_t seed,
uint64_t r_out[2]);
void hash_x64_128(const void *key, const int len, const uint32_t seed,
uint64_t r_out[2]);
void hash(const void *key, size_t len, const uint32_t seed,
size_t r_hash[2]);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_HASH_C_))
/******************************************************************************/
/* Internal implementation. */
JEMALLOC_INLINE uint32_t
hash_rotl_32(uint32_t x, int8_t r)
{
return ((x << r) | (x >> (32 - r)));
}
JEMALLOC_INLINE uint64_t
hash_rotl_64(uint64_t x, int8_t r)
{
return ((x << r) | (x >> (64 - r)));
}
JEMALLOC_INLINE uint32_t
hash_get_block_32(const uint32_t *p, int i)
{
return (p[i]);
}
JEMALLOC_INLINE uint64_t
hash_get_block_64(const uint64_t *p, int i)
{
return (p[i]);
}
JEMALLOC_INLINE uint32_t
hash_fmix_32(uint32_t h)
{
h ^= h >> 16;
h *= 0x85ebca6b;
h ^= h >> 13;
h *= 0xc2b2ae35;
h ^= h >> 16;
return (h);
}
JEMALLOC_INLINE uint64_t
hash_fmix_64(uint64_t k)
{
k ^= k >> 33;
k *= KQU(0xff51afd7ed558ccd);
k ^= k >> 33;
k *= KQU(0xc4ceb9fe1a85ec53);
k ^= k >> 33;
return (k);
}
JEMALLOC_INLINE uint32_t
hash_x86_32(const void *key, int len, uint32_t seed)
{
const uint8_t *data = (const uint8_t *) key;
const int nblocks = len / 4;
uint32_t h1 = seed;
const uint32_t c1 = 0xcc9e2d51;
const uint32_t c2 = 0x1b873593;
/* body */
{
const uint32_t *blocks = (const uint32_t *) (data + nblocks*4);
int i;
for (i = -nblocks; i; i++) {
uint32_t k1 = hash_get_block_32(blocks, i);
k1 *= c1;
k1 = hash_rotl_32(k1, 15);
k1 *= c2;
h1 ^= k1;
h1 = hash_rotl_32(h1, 13);
h1 = h1*5 + 0xe6546b64;
}
}
/* tail */
{
const uint8_t *tail = (const uint8_t *) (data + nblocks*4);
uint32_t k1 = 0;
switch (len & 3) {
case 3: k1 ^= tail[2] << 16;
case 2: k1 ^= tail[1] << 8;
case 1: k1 ^= tail[0]; k1 *= c1; k1 = hash_rotl_32(k1, 15);
k1 *= c2; h1 ^= k1;
}
}
/* finalization */
h1 ^= len;
h1 = hash_fmix_32(h1);
return (h1);
}
UNUSED JEMALLOC_INLINE void
hash_x86_128(const void *key, const int len, uint32_t seed,
uint64_t r_out[2])
{
const uint8_t * data = (const uint8_t *) key;
const int nblocks = len / 16;
uint32_t h1 = seed;
uint32_t h2 = seed;
uint32_t h3 = seed;
uint32_t h4 = seed;
const uint32_t c1 = 0x239b961b;
const uint32_t c2 = 0xab0e9789;
const uint32_t c3 = 0x38b34ae5;
const uint32_t c4 = 0xa1e38b93;
/* body */
{
const uint32_t *blocks = (const uint32_t *) (data + nblocks*16);
int i;
for (i = -nblocks; i; i++) {
uint32_t k1 = hash_get_block_32(blocks, i*4 + 0);
uint32_t k2 = hash_get_block_32(blocks, i*4 + 1);
uint32_t k3 = hash_get_block_32(blocks, i*4 + 2);
uint32_t k4 = hash_get_block_32(blocks, i*4 + 3);
k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1;
h1 = hash_rotl_32(h1, 19); h1 += h2;
h1 = h1*5 + 0x561ccd1b;
k2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 ^= k2;
h2 = hash_rotl_32(h2, 17); h2 += h3;
h2 = h2*5 + 0x0bcaa747;
k3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3;
h3 = hash_rotl_32(h3, 15); h3 += h4;
h3 = h3*5 + 0x96cd1c35;
k4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4;
h4 = hash_rotl_32(h4, 13); h4 += h1;
h4 = h4*5 + 0x32ac3b17;
}
}
/* tail */
{
const uint8_t *tail = (const uint8_t *) (data + nblocks*16);
uint32_t k1 = 0;
uint32_t k2 = 0;
uint32_t k3 = 0;
uint32_t k4 = 0;
switch (len & 15) {
case 15: k4 ^= tail[14] << 16;
case 14: k4 ^= tail[13] << 8;
case 13: k4 ^= tail[12] << 0;
k4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4;
case 12: k3 ^= tail[11] << 24;
case 11: k3 ^= tail[10] << 16;
case 10: k3 ^= tail[ 9] << 8;
case 9: k3 ^= tail[ 8] << 0;
k3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3;
case 8: k2 ^= tail[ 7] << 24;
case 7: k2 ^= tail[ 6] << 16;
case 6: k2 ^= tail[ 5] << 8;
case 5: k2 ^= tail[ 4] << 0;
k2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 ^= k2;
case 4: k1 ^= tail[ 3] << 24;
case 3: k1 ^= tail[ 2] << 16;
case 2: k1 ^= tail[ 1] << 8;
case 1: k1 ^= tail[ 0] << 0;
k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1;
}
}
/* finalization */
h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len;
h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;
h1 = hash_fmix_32(h1);
h2 = hash_fmix_32(h2);
h3 = hash_fmix_32(h3);
h4 = hash_fmix_32(h4);
h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;
r_out[0] = (((uint64_t) h2) << 32) | h1;
r_out[1] = (((uint64_t) h4) << 32) | h3;
}
UNUSED JEMALLOC_INLINE void
hash_x64_128(const void *key, const int len, const uint32_t seed,
uint64_t r_out[2])
{
const uint8_t *data = (const uint8_t *) key;
const int nblocks = len / 16;
uint64_t h1 = seed;
uint64_t h2 = seed;
const uint64_t c1 = KQU(0x87c37b91114253d5);
const uint64_t c2 = KQU(0x4cf5ad432745937f);
/* body */
{
const uint64_t *blocks = (const uint64_t *) (data);
int i;
for (i = 0; i < nblocks; i++) {
uint64_t k1 = hash_get_block_64(blocks, i*2 + 0);
uint64_t k2 = hash_get_block_64(blocks, i*2 + 1);
k1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1;
h1 = hash_rotl_64(h1, 27); h1 += h2;
h1 = h1*5 + 0x52dce729;
k2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2;
h2 = hash_rotl_64(h2, 31); h2 += h1;
h2 = h2*5 + 0x38495ab5;
}
}
/* tail */
{
const uint8_t *tail = (const uint8_t*)(data + nblocks*16);
uint64_t k1 = 0;
uint64_t k2 = 0;
switch (len & 15) {
case 15: k2 ^= ((uint64_t)(tail[14])) << 48;
case 14: k2 ^= ((uint64_t)(tail[13])) << 40;
case 13: k2 ^= ((uint64_t)(tail[12])) << 32;
case 12: k2 ^= ((uint64_t)(tail[11])) << 24;
case 11: k2 ^= ((uint64_t)(tail[10])) << 16;
case 10: k2 ^= ((uint64_t)(tail[ 9])) << 8;
case 9: k2 ^= ((uint64_t)(tail[ 8])) << 0;
k2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2;
case 8: k1 ^= ((uint64_t)(tail[ 7])) << 56;
case 7: k1 ^= ((uint64_t)(tail[ 6])) << 48;
case 6: k1 ^= ((uint64_t)(tail[ 5])) << 40;
case 5: k1 ^= ((uint64_t)(tail[ 4])) << 32;
case 4: k1 ^= ((uint64_t)(tail[ 3])) << 24;
case 3: k1 ^= ((uint64_t)(tail[ 2])) << 16;
case 2: k1 ^= ((uint64_t)(tail[ 1])) << 8;
case 1: k1 ^= ((uint64_t)(tail[ 0])) << 0;
k1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1;
}
}
/* finalization */
h1 ^= len; h2 ^= len;
h1 += h2;
h2 += h1;
h1 = hash_fmix_64(h1);
h2 = hash_fmix_64(h2);
h1 += h2;
h2 += h1;
r_out[0] = h1;
r_out[1] = h2;
}
/******************************************************************************/
/* API. */
JEMALLOC_INLINE void
hash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2])
{
#if (LG_SIZEOF_PTR == 3 && !defined(JEMALLOC_BIG_ENDIAN))
hash_x64_128(key, len, seed, (uint64_t *)r_hash);
#else
uint64_t hashes[2];
hash_x86_128(key, len, seed, hashes);
r_hash[0] = (size_t)hashes[0];
r_hash[1] = (size_t)hashes[1];
#endif
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

View File

@ -1,36 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void *huge_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
tcache_t *tcache);
void *huge_palloc(tsd_t *tsd, arena_t *arena, size_t size, size_t alignment,
bool zero, tcache_t *tcache);
bool huge_ralloc_no_move(void *ptr, size_t oldsize, size_t usize_min,
size_t usize_max, bool zero);
void *huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
size_t usize, size_t alignment, bool zero, tcache_t *tcache);
#ifdef JEMALLOC_JET
typedef void (huge_dalloc_junk_t)(void *, size_t);
extern huge_dalloc_junk_t *huge_dalloc_junk;
#endif
void huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache);
arena_t *huge_aalloc(const void *ptr);
size_t huge_salloc(const void *ptr);
prof_tctx_t *huge_prof_tctx_get(const void *ptr);
void huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx);
void huge_prof_tctx_reset(const void *ptr);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

View File

@ -1,64 +0,0 @@
#ifndef JEMALLOC_INTERNAL_DECLS_H
#define JEMALLOC_INTERNAL_DECLS_H
#include <math.h>
#ifdef _WIN32
# include <windows.h>
# include "msvc_compat/windows_extra.h"
#else
# include <sys/param.h>
# include <sys/mman.h>
# if !defined(__pnacl__) && !defined(__native_client__)
# include <sys/syscall.h>
# if !defined(SYS_write) && defined(__NR_write)
# define SYS_write __NR_write
# endif
# include <sys/uio.h>
# endif
# include <pthread.h>
# include <errno.h>
#endif
#include <sys/types.h>
#include <limits.h>
#ifndef SIZE_T_MAX
# define SIZE_T_MAX SIZE_MAX
#endif
#include <stdarg.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>
#ifndef offsetof
# define offsetof(type, member) ((size_t)&(((type *)NULL)->member))
#endif
#include <string.h>
#include <strings.h>
#include <ctype.h>
#ifdef _MSC_VER
# include <io.h>
typedef intptr_t ssize_t;
# define PATH_MAX 1024
# define STDERR_FILENO 2
# define __func__ __FUNCTION__
# ifdef JEMALLOC_HAS_RESTRICT
# define restrict __restrict
# endif
/* Disable warnings about deprecated system functions. */
# pragma warning(disable: 4996)
#if _MSC_VER < 1800
static int
isblank(int c)
{
return (c == '\t' || c == ' ');
}
#endif
#else
# include <unistd.h>
#endif
#include <fcntl.h>
#endif /* JEMALLOC_INTERNAL_DECLS_H */

View File

@ -1,262 +0,0 @@
#ifndef JEMALLOC_INTERNAL_DEFS_H_
#define JEMALLOC_INTERNAL_DEFS_H_
/*
* If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all
* public APIs to be prefixed. This makes it possible, with some care, to use
* multiple allocators simultaneously.
*/
#undef JEMALLOC_PREFIX
#undef JEMALLOC_CPREFIX
/*
* JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs.
* For shared libraries, symbol visibility mechanisms prevent these symbols
* from being exported, but for static libraries, naming collisions are a real
* possibility.
*/
#undef JEMALLOC_PRIVATE_NAMESPACE
/*
* Hyper-threaded CPUs may need a special instruction inside spin loops in
* order to yield to another virtual CPU.
*/
#undef CPU_SPINWAIT
/* Defined if C11 atomics are available. */
#undef JEMALLOC_C11ATOMICS
/* Defined if the equivalent of FreeBSD's atomic(9) functions are available. */
#undef JEMALLOC_ATOMIC9
/*
* Defined if OSAtomic*() functions are available, as provided by Darwin, and
* documented in the atomic(3) manual page.
*/
#undef JEMALLOC_OSATOMIC
/*
* Defined if __sync_add_and_fetch(uint32_t *, uint32_t) and
* __sync_sub_and_fetch(uint32_t *, uint32_t) are available, despite
* __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 not being defined (which means the
* functions are defined in libgcc instead of being inlines).
*/
#undef JE_FORCE_SYNC_COMPARE_AND_SWAP_4
/*
* Defined if __sync_add_and_fetch(uint64_t *, uint64_t) and
* __sync_sub_and_fetch(uint64_t *, uint64_t) are available, despite
* __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 not being defined (which means the
* functions are defined in libgcc instead of being inlines).
*/
#undef JE_FORCE_SYNC_COMPARE_AND_SWAP_8
/*
* Defined if __builtin_clz() and __builtin_clzl() are available.
*/
#undef JEMALLOC_HAVE_BUILTIN_CLZ
/*
* Defined if madvise(2) is available.
*/
#undef JEMALLOC_HAVE_MADVISE
/*
* Defined if OSSpin*() functions are available, as provided by Darwin, and
* documented in the spinlock(3) manual page.
*/
#undef JEMALLOC_OSSPIN
/*
* Defined if secure_getenv(3) is available.
*/
#undef JEMALLOC_HAVE_SECURE_GETENV
/*
* Defined if issetugid(2) is available.
*/
#undef JEMALLOC_HAVE_ISSETUGID
/*
* Defined if _malloc_thread_cleanup() exists. At least in the case of
* FreeBSD, pthread_key_create() allocates, which if used during malloc
* bootstrapping will cause recursion into the pthreads library. Therefore, if
* _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in
* malloc_tsd.
*/
#undef JEMALLOC_MALLOC_THREAD_CLEANUP
/*
* Defined if threaded initialization is known to be safe on this platform.
* Among other things, it must be possible to initialize a mutex without
* triggering allocation in order for threaded allocation to be safe.
*/
#undef JEMALLOC_THREADED_INIT
/*
* Defined if the pthreads implementation defines
* _pthread_mutex_init_calloc_cb(), in which case the function is used in order
* to avoid recursive allocation during mutex initialization.
*/
#undef JEMALLOC_MUTEX_INIT_CB
/* Non-empty if the tls_model attribute is supported. */
#undef JEMALLOC_TLS_MODEL
/* JEMALLOC_CC_SILENCE enables code that silences unhelpful compiler warnings. */
#undef JEMALLOC_CC_SILENCE
/* JEMALLOC_CODE_COVERAGE enables test code coverage analysis. */
#undef JEMALLOC_CODE_COVERAGE
/*
* JEMALLOC_DEBUG enables assertions and other sanity checks, and disables
* inline functions.
*/
#undef JEMALLOC_DEBUG
/* JEMALLOC_STATS enables statistics calculation. */
#undef JEMALLOC_STATS
/* JEMALLOC_PROF enables allocation profiling. */
#undef JEMALLOC_PROF
/* Use libunwind for profile backtracing if defined. */
#undef JEMALLOC_PROF_LIBUNWIND
/* Use libgcc for profile backtracing if defined. */
#undef JEMALLOC_PROF_LIBGCC
/* Use gcc intrinsics for profile backtracing if defined. */
#undef JEMALLOC_PROF_GCC
/*
* JEMALLOC_TCACHE enables a thread-specific caching layer for small objects.
* This makes it possible to allocate/deallocate objects without any locking
* when the cache is in the steady state.
*/
#undef JEMALLOC_TCACHE
/*
* JEMALLOC_DSS enables use of sbrk(2) to allocate chunks from the data storage
* segment (DSS).
*/
#undef JEMALLOC_DSS
/* Support memory filling (junk/zero/quarantine/redzone). */
#undef JEMALLOC_FILL
/* Support utrace(2)-based tracing. */
#undef JEMALLOC_UTRACE
/* Support Valgrind. */
#undef JEMALLOC_VALGRIND
/* Support optional abort() on OOM. */
#undef JEMALLOC_XMALLOC
/* Support lazy locking (avoid locking unless a second thread is launched). */
#undef JEMALLOC_LAZY_LOCK
/* Minimum size class to support is 2^LG_TINY_MIN bytes. */
#undef LG_TINY_MIN
/*
* Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size
* classes).
*/
#undef LG_QUANTUM
/* One page is 2^LG_PAGE bytes. */
#undef LG_PAGE
/*
* If defined, adjacent virtual memory mappings with identical attributes
* automatically coalesce, and they fragment when changes are made to subranges.
* This is the normal order of things for mmap()/munmap(), but on Windows
* VirtualAlloc()/VirtualFree() operations must be precisely matched, i.e.
* mappings do *not* coalesce/fragment.
*/
#undef JEMALLOC_MAPS_COALESCE
/*
* If defined, use munmap() to unmap freed chunks, rather than storing them for
* later reuse. This is disabled by default on Linux because common sequences
* of mmap()/munmap() calls will cause virtual memory map holes.
*/
#undef JEMALLOC_MUNMAP
/* TLS is used to map arenas and magazine caches to threads. */
#undef JEMALLOC_TLS
/*
* ffs()/ffsl() functions to use for bitmapping. Don't use these directly;
* instead, use jemalloc_ffs() or jemalloc_ffsl() from util.h.
*/
#undef JEMALLOC_INTERNAL_FFSL
#undef JEMALLOC_INTERNAL_FFS
/*
* JEMALLOC_IVSALLOC enables ivsalloc(), which verifies that pointers reside
* within jemalloc-owned chunks before dereferencing them.
*/
#undef JEMALLOC_IVSALLOC
/*
* If defined, explicitly attempt to more uniformly distribute large allocation
* pointer alignments across all cache indices.
*/
#undef JEMALLOC_CACHE_OBLIVIOUS
/*
* Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings.
*/
#undef JEMALLOC_ZONE
#undef JEMALLOC_ZONE_VERSION
/*
* Methods for purging unused pages differ between operating systems.
*
* madvise(..., MADV_DONTNEED) : On Linux, this immediately discards pages,
* such that new pages will be demand-zeroed if
* the address region is later touched.
* madvise(..., MADV_FREE) : On FreeBSD and Darwin, this marks pages as being
* unused, such that they will be discarded rather
* than swapped out.
*/
#undef JEMALLOC_PURGE_MADVISE_DONTNEED
#undef JEMALLOC_PURGE_MADVISE_FREE
/* Define if operating system has alloca.h header. */
#undef JEMALLOC_HAS_ALLOCA_H
/* C99 restrict keyword supported. */
#undef JEMALLOC_HAS_RESTRICT
/* For use by hash code. */
#undef JEMALLOC_BIG_ENDIAN
/* sizeof(int) == 2^LG_SIZEOF_INT. */
#undef LG_SIZEOF_INT
/* sizeof(long) == 2^LG_SIZEOF_LONG. */
#undef LG_SIZEOF_LONG
/* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */
#undef LG_SIZEOF_INTMAX_T
/* glibc malloc hooks (__malloc_hook, __realloc_hook, __free_hook). */
#undef JEMALLOC_GLIBC_MALLOC_HOOK
/* glibc memalign hook. */
#undef JEMALLOC_GLIBC_MEMALIGN_HOOK
/* Adaptive mutex support in pthreads. */
#undef JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP
/*
* If defined, jemalloc symbols are not exported (doesn't work when
* JEMALLOC_PREFIX is not defined).
*/
#undef JEMALLOC_EXPORT
#endif /* JEMALLOC_INTERNAL_DEFS_H_ */

View File

@ -1,57 +0,0 @@
/*
* JEMALLOC_ALWAYS_INLINE and JEMALLOC_INLINE are used within header files for
* functions that are static inline functions if inlining is enabled, and
* single-definition library-private functions if inlining is disabled.
*
* JEMALLOC_ALWAYS_INLINE_C and JEMALLOC_INLINE_C are for use in .c files, in
* which case the denoted functions are always static, regardless of whether
* inlining is enabled.
*/
#if defined(JEMALLOC_DEBUG) || defined(JEMALLOC_CODE_COVERAGE)
/* Disable inlining to make debugging/profiling easier. */
# define JEMALLOC_ALWAYS_INLINE
# define JEMALLOC_ALWAYS_INLINE_C static
# define JEMALLOC_INLINE
# define JEMALLOC_INLINE_C static
# define inline
#else
# define JEMALLOC_ENABLE_INLINE
# ifdef JEMALLOC_HAVE_ATTR
# define JEMALLOC_ALWAYS_INLINE \
static inline JEMALLOC_ATTR(unused) JEMALLOC_ATTR(always_inline)
# define JEMALLOC_ALWAYS_INLINE_C \
static inline JEMALLOC_ATTR(always_inline)
# else
# define JEMALLOC_ALWAYS_INLINE static inline
# define JEMALLOC_ALWAYS_INLINE_C static inline
# endif
# define JEMALLOC_INLINE static inline
# define JEMALLOC_INLINE_C static inline
# ifdef _MSC_VER
# define inline _inline
# endif
#endif
#ifdef JEMALLOC_CC_SILENCE
# define UNUSED JEMALLOC_ATTR(unused)
#else
# define UNUSED
#endif
#define ZU(z) ((size_t)z)
#define ZI(z) ((ssize_t)z)
#define QU(q) ((uint64_t)q)
#define QI(q) ((int64_t)q)
#define KZU(z) ZU(z##ULL)
#define KZI(z) ZI(z##LL)
#define KQU(q) QU(q##ULL)
#define KQI(q) QI(q##LL)
#ifndef __DECONST
# define __DECONST(type, var) ((type)(uintptr_t)(const void *)(var))
#endif
#ifndef JEMALLOC_HAS_RESTRICT
# define restrict
#endif
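The constant-typing macros above exist mainly so that literal constants get an explicit width; KQU() pastes a ULL suffix so 64-bit constants (for example the hash_fmix_64() multipliers) stay 64-bit even on 32-bit targets. A tiny standalone sketch; the values are arbitrary.

    #include <stdint.h>
    #include <stdio.h>

    #define ZU(z)   ((size_t)z)
    #define QU(q)   ((uint64_t)q)
    #define KQU(q)  QU(q##ULL)                      /* paste ULL so the literal itself is 64-bit */

    int main(void) {
        uint64_t k = KQU(0xff51afd7ed558ccd);       /* same constant hash_fmix_64() uses */
        size_t   n = ZU(1) << 20;                   /* 2^20 expressed as a size_t */
        printf("%llx %zu\n", (unsigned long long)k, n);
        return 0;
    }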

View File

@ -1,115 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
void mb_write(void);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MB_C_))
#ifdef __i386__
/*
* According to the Intel Architecture Software Developer's Manual, current
* processors execute instructions in order from the perspective of other
* processors in a multiprocessor system, but 1) Intel reserves the right to
* change that, and 2) the compiler's optimizer could re-order instructions if
* there weren't some form of barrier. Therefore, even if running on an
* architecture that does not need memory barriers (everything through at least
* i686), an "optimizer barrier" is necessary.
*/
JEMALLOC_INLINE void
mb_write(void)
{
# if 0
/* This is a true memory barrier. */
asm volatile ("pusha;"
"xor %%eax,%%eax;"
"cpuid;"
"popa;"
: /* Outputs. */
: /* Inputs. */
: "memory" /* Clobbers. */
);
#else
/*
* This is hopefully enough to keep the compiler from reordering
* instructions around this one.
*/
asm volatile ("nop;"
: /* Outputs. */
: /* Inputs. */
: "memory" /* Clobbers. */
);
#endif
}
#elif (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE void
mb_write(void)
{
asm volatile ("sfence"
: /* Outputs. */
: /* Inputs. */
: "memory" /* Clobbers. */
);
}
#elif defined(__powerpc__)
JEMALLOC_INLINE void
mb_write(void)
{
asm volatile ("eieio"
: /* Outputs. */
: /* Inputs. */
: "memory" /* Clobbers. */
);
}
#elif defined(__sparc64__)
JEMALLOC_INLINE void
mb_write(void)
{
asm volatile ("membar #StoreStore"
: /* Outputs. */
: /* Inputs. */
: "memory" /* Clobbers. */
);
}
#elif defined(__tile__)
JEMALLOC_INLINE void
mb_write(void)
{
__sync_synchronize();
}
#else
/*
* This is much slower than a simple memory barrier, but the semantics of mutex
* unlock make this work.
*/
JEMALLOC_INLINE void
mb_write(void)
{
malloc_mutex_t mtx;
malloc_mutex_init(&mtx);
malloc_mutex_lock(&mtx);
malloc_mutex_unlock(&mtx);
}
#endif
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
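For comparison, the modern portable spelling of the store barrier that mb_write() provides is a C11 release fence; jemalloc of this vintage predates relying on <stdatomic.h>, so the snippet below is only an illustrative equivalent, not part of the original header.

    #include <stdatomic.h>

    static inline void mb_write_c11(void) {
        atomic_thread_fence(memory_order_release);  /* orders earlier stores before later ones */
    }

    int main(void) {
        mb_write_c11();
        return 0;
    }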

View File

@ -1,111 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct malloc_mutex_s malloc_mutex_t;
#ifdef _WIN32
# define MALLOC_MUTEX_INITIALIZER
#elif (defined(JEMALLOC_OSSPIN))
# define MALLOC_MUTEX_INITIALIZER {0}
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
# define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, NULL}
#else
# if (defined(JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP) && \
defined(PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP))
# define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_ADAPTIVE_NP
# define MALLOC_MUTEX_INITIALIZER {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP}
# else
# define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_DEFAULT
# define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER}
# endif
#endif
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct malloc_mutex_s {
#ifdef _WIN32
# if _WIN32_WINNT >= 0x0600
SRWLOCK lock;
# else
CRITICAL_SECTION lock;
# endif
#elif (defined(JEMALLOC_OSSPIN))
OSSpinLock lock;
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
pthread_mutex_t lock;
malloc_mutex_t *postponed_next;
#else
pthread_mutex_t lock;
#endif
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#ifdef JEMALLOC_LAZY_LOCK
extern bool isthreaded;
#else
# undef isthreaded /* Undo private_namespace.h definition. */
# define isthreaded true
#endif
bool malloc_mutex_init(malloc_mutex_t *mutex);
void malloc_mutex_prefork(malloc_mutex_t *mutex);
void malloc_mutex_postfork_parent(malloc_mutex_t *mutex);
void malloc_mutex_postfork_child(malloc_mutex_t *mutex);
bool mutex_boot(void);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
void malloc_mutex_lock(malloc_mutex_t *mutex);
void malloc_mutex_unlock(malloc_mutex_t *mutex);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_))
JEMALLOC_INLINE void
malloc_mutex_lock(malloc_mutex_t *mutex)
{
if (isthreaded) {
#ifdef _WIN32
# if _WIN32_WINNT >= 0x0600
AcquireSRWLockExclusive(&mutex->lock);
# else
EnterCriticalSection(&mutex->lock);
# endif
#elif (defined(JEMALLOC_OSSPIN))
OSSpinLockLock(&mutex->lock);
#else
pthread_mutex_lock(&mutex->lock);
#endif
}
}
JEMALLOC_INLINE void
malloc_mutex_unlock(malloc_mutex_t *mutex)
{
if (isthreaded) {
#ifdef _WIN32
# if _WIN32_WINNT >= 0x0600
ReleaseSRWLockExclusive(&mutex->lock);
# else
LeaveCriticalSection(&mutex->lock);
# endif
#elif (defined(JEMALLOC_OSSPIN))
OSSpinLockUnlock(&mutex->lock);
#else
pthread_mutex_unlock(&mutex->lock);
#endif
}
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
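When JEMALLOC_LAZY_LOCK is in effect, the isthreaded guard above implements the "lazy lock" idea: while the process is still single-threaded, lock and unlock degenerate to no-ops. A standalone sketch of the same pattern with pthreads; the names here are illustrative and not jemalloc's.

    #include <pthread.h>
    #include <stdbool.h>

    static bool threaded = false;                     /* flipped once a thread is spawned */
    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    static void maybe_lock(void)   { if (threaded) pthread_mutex_lock(&big_lock); }
    static void maybe_unlock(void) { if (threaded) pthread_mutex_unlock(&big_lock); }

    static void *worker(void *arg) { (void)arg; maybe_lock(); maybe_unlock(); return NULL; }

    int main(void) {
        maybe_lock();                                 /* single-threaded: both are no-ops */
        maybe_unlock();
        threaded = true;                              /* from here on, real locking */
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }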

View File

@ -1,26 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void *pages_map(void *addr, size_t size);
void pages_unmap(void *addr, size_t size);
void *pages_trim(void *addr, size_t alloc_size, size_t leadsize,
size_t size);
bool pages_commit(void *addr, size_t size);
bool pages_decommit(void *addr, size_t size);
bool pages_purge(void *addr, size_t size);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
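pages_purge() discards the contents of unused pages; on Linux that maps to madvise(MADV_DONTNEED), as described in the JEMALLOC_PURGE_MADVISE_* comment earlier. A rough standalone sketch of that behavior, for illustration only (not jemalloc's implementation):

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <assert.h>
    #include <string.h>

    int main(void) {
        size_t len = (size_t)1 << 21;                 /* one 2 MiB "chunk" */
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(p != MAP_FAILED);
        memset(p, 0xab, len);                         /* dirty the pages */
        madvise(p, len, MADV_DONTNEED);               /* "purge": contents discarded */
        assert(p[0] == 0);                            /* demand-zeroed on next touch */
        munmap(p, len);
        return 0;
    }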

View File

@ -1,5 +0,0 @@
#!/bin/sh
for symbol in `cat $1` ; do
echo "#define ${symbol} JEMALLOC_N(${symbol})"
done
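For each name in the symbol list that follows (private_symbols.txt), this loop emits one mangling #define; for example, the first few entries of the list produce:

    #define a0dalloc JEMALLOC_N(a0dalloc)
    #define a0get JEMALLOC_N(a0get)
    #define a0malloc JEMALLOC_N(a0malloc)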

View File

@ -1,499 +0,0 @@
a0dalloc
a0get
a0malloc
arena_aalloc
arena_alloc_junk_small
arena_bin_index
arena_bin_info
arena_bitselm_get
arena_boot
arena_choose
arena_choose_hard
arena_chunk_alloc_huge
arena_chunk_cache_maybe_insert
arena_chunk_cache_maybe_remove
arena_chunk_dalloc_huge
arena_chunk_ralloc_huge_expand
arena_chunk_ralloc_huge_shrink
arena_chunk_ralloc_huge_similar
arena_cleanup
arena_dalloc
arena_dalloc_bin
arena_dalloc_bin_junked_locked
arena_dalloc_junk_large
arena_dalloc_junk_small
arena_dalloc_large
arena_dalloc_large_junked_locked
arena_dalloc_small
arena_dss_prec_get
arena_dss_prec_set
arena_get
arena_get_hard
arena_init
arena_lg_dirty_mult_default_get
arena_lg_dirty_mult_default_set
arena_lg_dirty_mult_get
arena_lg_dirty_mult_set
arena_malloc
arena_malloc_large
arena_malloc_small
arena_mapbits_allocated_get
arena_mapbits_binind_get
arena_mapbits_decommitted_get
arena_mapbits_dirty_get
arena_mapbits_get
arena_mapbits_internal_set
arena_mapbits_large_binind_set
arena_mapbits_large_get
arena_mapbits_large_set
arena_mapbits_large_size_get
arena_mapbitsp_get
arena_mapbitsp_read
arena_mapbitsp_write
arena_mapbits_size_decode
arena_mapbits_size_encode
arena_mapbits_small_runind_get
arena_mapbits_small_set
arena_mapbits_unallocated_set
arena_mapbits_unallocated_size_get
arena_mapbits_unallocated_size_set
arena_mapbits_unzeroed_get
arena_maxrun
arena_maybe_purge
arena_metadata_allocated_add
arena_metadata_allocated_get
arena_metadata_allocated_sub
arena_migrate
arena_miscelm_get
arena_miscelm_to_pageind
arena_miscelm_to_rpages
arena_nbound
arena_new
arena_node_alloc
arena_node_dalloc
arena_palloc
arena_postfork_child
arena_postfork_parent
arena_prefork
arena_prof_accum
arena_prof_accum_impl
arena_prof_accum_locked
arena_prof_promoted
arena_prof_tctx_get
arena_prof_tctx_reset
arena_prof_tctx_set
arena_ptr_small_binind_get
arena_purge_all
arena_quarantine_junk_small
arena_ralloc
arena_ralloc_junk_large
arena_ralloc_no_move
arena_rd_to_miscelm
arena_redzone_corruption
arena_run_regind
arena_run_to_miscelm
arena_salloc
arenas_cache_bypass_cleanup
arenas_cache_cleanup
arena_sdalloc
arena_stats_merge
arena_tcache_fill_small
atomic_add_p
atomic_add_u
atomic_add_uint32
atomic_add_uint64
atomic_add_z
atomic_cas_p
atomic_cas_u
atomic_cas_uint32
atomic_cas_uint64
atomic_cas_z
atomic_sub_p
atomic_sub_u
atomic_sub_uint32
atomic_sub_uint64
atomic_sub_z
base_alloc
base_boot
base_postfork_child
base_postfork_parent
base_prefork
base_stats_get
bitmap_full
bitmap_get
bitmap_info_init
bitmap_info_ngroups
bitmap_init
bitmap_set
bitmap_sfu
bitmap_size
bitmap_unset
bootstrap_calloc
bootstrap_free
bootstrap_malloc
bt_init
buferror
chunk_alloc_base
chunk_alloc_cache
chunk_alloc_dss
chunk_alloc_mmap
chunk_alloc_wrapper
chunk_boot
chunk_dalloc_arena
chunk_dalloc_cache
chunk_dalloc_mmap
chunk_dalloc_wrapper
chunk_deregister
chunk_dss_boot
chunk_dss_postfork_child
chunk_dss_postfork_parent
chunk_dss_prec_get
chunk_dss_prec_set
chunk_dss_prefork
chunk_hooks_default
chunk_hooks_get
chunk_hooks_set
chunk_in_dss
chunk_lookup
chunk_npages
chunk_postfork_child
chunk_postfork_parent
chunk_prefork
chunk_purge_arena
chunk_purge_wrapper
chunk_register
chunksize
chunksize_mask
chunks_rtree
ckh_count
ckh_delete
ckh_insert
ckh_iter
ckh_new
ckh_pointer_hash
ckh_pointer_keycomp
ckh_remove
ckh_search
ckh_string_hash
ckh_string_keycomp
ctl_boot
ctl_bymib
ctl_byname
ctl_nametomib
ctl_postfork_child
ctl_postfork_parent
ctl_prefork
dss_prec_names
extent_node_achunk_get
extent_node_achunk_set
extent_node_addr_get
extent_node_addr_set
extent_node_arena_get
extent_node_arena_set
extent_node_dirty_insert
extent_node_dirty_linkage_init
extent_node_dirty_remove
extent_node_init
extent_node_prof_tctx_get
extent_node_prof_tctx_set
extent_node_size_get
extent_node_size_set
extent_node_zeroed_get
extent_node_zeroed_set
extent_tree_ad_empty
extent_tree_ad_first
extent_tree_ad_insert
extent_tree_ad_iter
extent_tree_ad_iter_recurse
extent_tree_ad_iter_start
extent_tree_ad_last
extent_tree_ad_new
extent_tree_ad_next
extent_tree_ad_nsearch
extent_tree_ad_prev
extent_tree_ad_psearch
extent_tree_ad_remove
extent_tree_ad_reverse_iter
extent_tree_ad_reverse_iter_recurse
extent_tree_ad_reverse_iter_start
extent_tree_ad_search
extent_tree_szad_empty
extent_tree_szad_first
extent_tree_szad_insert
extent_tree_szad_iter
extent_tree_szad_iter_recurse
extent_tree_szad_iter_start
extent_tree_szad_last
extent_tree_szad_new
extent_tree_szad_next
extent_tree_szad_nsearch
extent_tree_szad_prev
extent_tree_szad_psearch
extent_tree_szad_remove
extent_tree_szad_reverse_iter
extent_tree_szad_reverse_iter_recurse
extent_tree_szad_reverse_iter_start
extent_tree_szad_search
get_errno
hash
hash_fmix_32
hash_fmix_64
hash_get_block_32
hash_get_block_64
hash_rotl_32
hash_rotl_64
hash_x64_128
hash_x86_128
hash_x86_32
huge_aalloc
huge_dalloc
huge_dalloc_junk
huge_malloc
huge_palloc
huge_prof_tctx_get
huge_prof_tctx_reset
huge_prof_tctx_set
huge_ralloc
huge_ralloc_no_move
huge_salloc
iaalloc
iallocztm
icalloc
icalloct
idalloc
idalloct
idalloctm
imalloc
imalloct
index2size
index2size_compute
index2size_lookup
index2size_tab
in_valgrind
ipalloc
ipalloct
ipallocztm
iqalloc
iralloc
iralloct
iralloct_realign
isalloc
isdalloct
isqalloc
isthreaded
ivsalloc
ixalloc
jemalloc_postfork_child
jemalloc_postfork_parent
jemalloc_prefork
large_maxclass
lg_floor
malloc_cprintf
malloc_mutex_init
malloc_mutex_lock
malloc_mutex_postfork_child
malloc_mutex_postfork_parent
malloc_mutex_prefork
malloc_mutex_unlock
malloc_printf
malloc_snprintf
malloc_strtoumax
malloc_tsd_boot0
malloc_tsd_boot1
malloc_tsd_cleanup_register
malloc_tsd_dalloc
malloc_tsd_malloc
malloc_tsd_no_cleanup
malloc_vcprintf
malloc_vsnprintf
malloc_write
map_bias
map_misc_offset
mb_write
mutex_boot
narenas_cache_cleanup
narenas_total_get
ncpus
nhbins
opt_abort
opt_dss
opt_junk
opt_junk_alloc
opt_junk_free
opt_lg_chunk
opt_lg_dirty_mult
opt_lg_prof_interval
opt_lg_prof_sample
opt_lg_tcache_max
opt_narenas
opt_prof
opt_prof_accum
opt_prof_active
opt_prof_final
opt_prof_gdump
opt_prof_leak
opt_prof_prefix
opt_prof_thread_active_init
opt_quarantine
opt_redzone
opt_stats_print
opt_tcache
opt_utrace
opt_xmalloc
opt_zero
p2rz
pages_commit
pages_decommit
pages_map
pages_purge
pages_trim
pages_unmap
pow2_ceil
prof_active_get
prof_active_get_unlocked
prof_active_set
prof_alloc_prep
prof_alloc_rollback
prof_backtrace
prof_boot0
prof_boot1
prof_boot2
prof_dump_header
prof_dump_open
prof_free
prof_free_sampled_object
prof_gdump
prof_gdump_get
prof_gdump_get_unlocked
prof_gdump_set
prof_gdump_val
prof_idump
prof_interval
prof_lookup
prof_malloc
prof_malloc_sample_object
prof_mdump
prof_postfork_child
prof_postfork_parent
prof_prefork
prof_realloc
prof_reset
prof_sample_accum_update
prof_sample_threshold_update
prof_tctx_get
prof_tctx_reset
prof_tctx_set
prof_tdata_cleanup
prof_tdata_get
prof_tdata_init
prof_tdata_reinit
prof_thread_active_get
prof_thread_active_init_get
prof_thread_active_init_set
prof_thread_active_set
prof_thread_name_get
prof_thread_name_set
quarantine
quarantine_alloc_hook
quarantine_alloc_hook_work
quarantine_cleanup
register_zone
rtree_child_read
rtree_child_read_hard
rtree_child_tryread
rtree_delete
rtree_get
rtree_new
rtree_node_valid
rtree_set
rtree_start_level
rtree_subkey
rtree_subtree_read
rtree_subtree_read_hard
rtree_subtree_tryread
rtree_val_read
rtree_val_write
s2u
s2u_compute
s2u_lookup
sa2u
set_errno
size2index
size2index_compute
size2index_lookup
size2index_tab
stats_cactive
stats_cactive_add
stats_cactive_get
stats_cactive_sub
stats_print
tcache_alloc_easy
tcache_alloc_large
tcache_alloc_small
tcache_alloc_small_hard
tcache_arena_associate
tcache_arena_dissociate
tcache_arena_reassociate
tcache_bin_flush_large
tcache_bin_flush_small
tcache_bin_info
tcache_boot
tcache_cleanup
tcache_create
tcache_dalloc_large
tcache_dalloc_small
tcache_enabled_cleanup
tcache_enabled_get
tcache_enabled_set
tcache_event
tcache_event_hard
tcache_flush
tcache_get
tcache_get_hard
tcache_maxclass
tcaches
tcache_salloc
tcaches_create
tcaches_destroy
tcaches_flush
tcaches_get
tcache_stats_merge
thread_allocated_cleanup
thread_deallocated_cleanup
tsd_arena_get
tsd_arena_set
tsd_boot
tsd_boot0
tsd_boot1
tsd_booted
tsd_cleanup
tsd_cleanup_wrapper
tsd_fetch
tsd_get
tsd_wrapper_get
tsd_wrapper_set
tsd_initialized
tsd_init_check_recursion
tsd_init_finish
tsd_init_head
tsd_nominal
tsd_quarantine_get
tsd_quarantine_set
tsd_set
tsd_tcache_enabled_get
tsd_tcache_enabled_set
tsd_tcache_get
tsd_tcache_set
tsd_tls
tsd_tsd
tsd_prof_tdata_get
tsd_prof_tdata_set
tsd_thread_allocated_get
tsd_thread_allocated_set
tsd_thread_deallocated_get
tsd_thread_deallocated_set
u2rz
valgrind_freelike_block
valgrind_make_mem_defined
valgrind_make_mem_noaccess
valgrind_make_mem_undefined

View File

@ -1,5 +0,0 @@
#!/bin/sh
for symbol in `cat $1` ; do
echo "#undef ${symbol}"
done

View File

@ -1,60 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
/*
* Simple linear congruential pseudo-random number generator:
*
* prng(x) = (a*x + c) % m
*
* where the following constants ensure maximal period:
*
* a == Odd number (relatively prime to 2^n), and (a-1) is a multiple of 4.
* c == Odd number (relatively prime to 2^n).
* m == 2^32
*
* See Knuth's TAOCP 3rd Ed., Vol. 2, pg. 17 for details on these constraints.
*
* This choice of m has the disadvantage that the quality of the bits is
* proportional to bit position. For example, the lowest bit has a cycle of 2,
* the next has a cycle of 4, etc. For this reason, we prefer to use the upper
* bits.
*
* Macro parameters:
* uint32_t r : Result.
* unsigned lg_range : (0..32], number of least significant bits to return.
* uint32_t state : Seed value.
* const uint32_t a, c : See above discussion.
*/
#define prng32(r, lg_range, state, a, c) do { \
assert((lg_range) > 0); \
assert((lg_range) <= 32); \
\
r = (state * (a)) + (c); \
state = r; \
r >>= (32 - (lg_range)); \
} while (false)
/* Same as prng32(), but 64 bits of pseudo-randomness, using uint64_t. */
#define prng64(r, lg_range, state, a, c) do { \
assert((lg_range) > 0); \
assert((lg_range) <= 64); \
\
r = (state * (a)) + (c); \
state = r; \
r >>= (64 - (lg_range)); \
} while (false)
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
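A small worked example of the prng32() recurrence above, fed with the CKH_A/CKH_C constants that ckh.h uses for its prng_state; standalone and illustrative only (the assert()s from the original macro are dropped here).

    #include <stdint.h>
    #include <stdio.h>

    #define prng32(r, lg_range, state, a, c) do {   \
        r = ((state) * (a)) + (c);                  \
        state = r;                                  \
        r >>= (32 - (lg_range));                    \
    } while (0)

    int main(void) {
        uint32_t state = 42, r;                     /* arbitrary seed */
        for (int i = 0; i < 4; i++) {
            prng32(r, 4, state, 1103515241U, 12347U);  /* CKH_A, CKH_C; keep the top 4 bits */
            printf("%u\n", r);                      /* values in [0, 15] */
        }
        return 0;
    }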

View File

@ -1,545 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct prof_bt_s prof_bt_t;
typedef struct prof_cnt_s prof_cnt_t;
typedef struct prof_tctx_s prof_tctx_t;
typedef struct prof_gctx_s prof_gctx_t;
typedef struct prof_tdata_s prof_tdata_t;
/* Option defaults. */
#ifdef JEMALLOC_PROF
# define PROF_PREFIX_DEFAULT "jeprof"
#else
# define PROF_PREFIX_DEFAULT ""
#endif
#define LG_PROF_SAMPLE_DEFAULT 19
#define LG_PROF_INTERVAL_DEFAULT -1
/*
* Hard limit on stack backtrace depth. The version of prof_backtrace() that
* is based on __builtin_return_address() necessarily has a hard-coded number
* of backtrace frame handlers, and should be kept in sync with this setting.
*/
#define PROF_BT_MAX 128
/* Initial hash table size. */
#define PROF_CKH_MINITEMS 64
/* Size of memory buffer to use when writing dump files. */
#define PROF_DUMP_BUFSIZE 65536
/* Size of stack-allocated buffer used by prof_printf(). */
#define PROF_PRINTF_BUFSIZE 128
/*
* Number of mutexes shared among all gctx's. No space is allocated for these
* unless profiling is enabled, so it's okay to over-provision.
*/
#define PROF_NCTX_LOCKS 1024
/*
* Number of mutexes shared among all tdata's. No space is allocated for these
* unless profiling is enabled, so it's okay to over-provision.
*/
#define PROF_NTDATA_LOCKS 256
/*
* prof_tdata pointers close to NULL are used to encode state information that
* is used for cleaning up during thread shutdown.
*/
#define PROF_TDATA_STATE_REINCARNATED ((prof_tdata_t *)(uintptr_t)1)
#define PROF_TDATA_STATE_PURGATORY ((prof_tdata_t *)(uintptr_t)2)
#define PROF_TDATA_STATE_MAX PROF_TDATA_STATE_PURGATORY
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct prof_bt_s {
/* Backtrace, stored as len program counters. */
void **vec;
unsigned len;
};
#ifdef JEMALLOC_PROF_LIBGCC
/* Data structure passed to libgcc _Unwind_Backtrace() callback functions. */
typedef struct {
prof_bt_t *bt;
unsigned max;
} prof_unwind_data_t;
#endif
struct prof_cnt_s {
/* Profiling counters. */
uint64_t curobjs;
uint64_t curbytes;
uint64_t accumobjs;
uint64_t accumbytes;
};
typedef enum {
prof_tctx_state_initializing,
prof_tctx_state_nominal,
prof_tctx_state_dumping,
prof_tctx_state_purgatory /* Dumper must finish destroying. */
} prof_tctx_state_t;
struct prof_tctx_s {
/* Thread data for thread that performed the allocation. */
prof_tdata_t *tdata;
/*
* Copy of tdata->thr_{uid,discrim}, necessary because tdata may be
* defunct during teardown.
*/
uint64_t thr_uid;
uint64_t thr_discrim;
/* Profiling counters, protected by tdata->lock. */
prof_cnt_t cnts;
/* Associated global context. */
prof_gctx_t *gctx;
/*
* UID that distinguishes multiple tctx's created by the same thread,
* but coexisting in gctx->tctxs. There are two ways that such
* coexistence can occur:
* - A dumper thread can cause a tctx to be retained in the purgatory
* state.
* - Although a single "producer" thread must create all tctx's which
* share the same thr_uid, multiple "consumers" can each concurrently
* execute portions of prof_tctx_destroy(). prof_tctx_destroy() only
* gets called once each time cnts.cur{objs,bytes} drop to 0, but this
* threshold can be hit again before the first consumer finishes
* executing prof_tctx_destroy().
*/
uint64_t tctx_uid;
/* Linkage into gctx's tctxs. */
rb_node(prof_tctx_t) tctx_link;
/*
* True during prof_alloc_prep()..prof_malloc_sample_object(), prevents
* sample vs destroy race.
*/
bool prepared;
/* Current dump-related state, protected by gctx->lock. */
prof_tctx_state_t state;
/*
* Copy of cnts snapshotted during early dump phase, protected by
* dump_mtx.
*/
prof_cnt_t dump_cnts;
};
typedef rb_tree(prof_tctx_t) prof_tctx_tree_t;
struct prof_gctx_s {
/* Protects nlimbo, cnt_summed, and tctxs. */
malloc_mutex_t *lock;
/*
* Number of threads that currently cause this gctx to be in a state of
* limbo due to one of:
* - Initializing this gctx.
* - Initializing per thread counters associated with this gctx.
* - Preparing to destroy this gctx.
* - Dumping a heap profile that includes this gctx.
* nlimbo must be 1 (single destroyer) in order to safely destroy the
* gctx.
*/
unsigned nlimbo;
/*
* Tree of profile counters, one for each thread that has allocated in
* this context.
*/
prof_tctx_tree_t tctxs;
/* Linkage for tree of contexts to be dumped. */
rb_node(prof_gctx_t) dump_link;
/* Temporary storage for summation during dump. */
prof_cnt_t cnt_summed;
/* Associated backtrace. */
prof_bt_t bt;
/* Backtrace vector, variable size, referred to by bt. */
void *vec[1];
};
typedef rb_tree(prof_gctx_t) prof_gctx_tree_t;
struct prof_tdata_s {
malloc_mutex_t *lock;
/* Monotonically increasing unique thread identifier. */
uint64_t thr_uid;
/*
* Monotonically increasing discriminator among tdata structures
* associated with the same thr_uid.
*/
uint64_t thr_discrim;
/* Included in heap profile dumps if non-NULL. */
char *thread_name;
bool attached;
bool expired;
rb_node(prof_tdata_t) tdata_link;
/*
* Counter used to initialize prof_tctx_t's tctx_uid. No locking is
* necessary when incrementing this field, because only one thread ever
* does so.
*/
uint64_t tctx_uid_next;
/*
* Hash of (prof_bt_t *)-->(prof_tctx_t *). Each thread tracks
* backtraces for which it has non-zero allocation/deallocation counters
* associated with thread-specific prof_tctx_t objects. Other threads
* may write to prof_tctx_t contents when freeing associated objects.
*/
ckh_t bt2tctx;
/* Sampling state. */
uint64_t prng_state;
uint64_t bytes_until_sample;
/* State used to avoid dumping while operating on prof internals. */
bool enq;
bool enq_idump;
bool enq_gdump;
/*
* Set to true during an early dump phase for tdata's which are
* currently being dumped. New threads' tdata's have this initialized
* to false so that they aren't accidentally included in later dump
* phases.
*/
bool dumping;
/*
* True if profiling is active for this tdata's thread
* (thread.prof.active mallctl).
*/
bool active;
/* Temporary storage for summation during dump. */
prof_cnt_t cnt_summed;
/* Backtrace vector, used for calls to prof_backtrace(). */
void *vec[PROF_BT_MAX];
};
typedef rb_tree(prof_tdata_t) prof_tdata_tree_t;
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
extern bool opt_prof;
extern bool opt_prof_active;
extern bool opt_prof_thread_active_init;
extern size_t opt_lg_prof_sample; /* Mean bytes between samples. */
extern ssize_t opt_lg_prof_interval; /* lg(prof_interval). */
extern bool opt_prof_gdump; /* High-water memory dumping. */
extern bool opt_prof_final; /* Final profile dumping. */
extern bool opt_prof_leak; /* Dump leak summary at exit. */
extern bool opt_prof_accum; /* Report cumulative bytes. */
extern char opt_prof_prefix[
/* Minimize memory bloat for non-prof builds. */
#ifdef JEMALLOC_PROF
PATH_MAX +
#endif
1];
/* Accessed via prof_active_[gs]et{_unlocked,}(). */
extern bool prof_active;
/* Accessed via prof_gdump_[gs]et{_unlocked,}(). */
extern bool prof_gdump_val;
/*
* Profile dump interval, measured in bytes allocated. Each arena triggers a
* profile dump when it reaches this threshold. The effect is that the
* interval between profile dumps averages prof_interval, though the actual
* interval between dumps will tend to be sporadic, and the interval will be a
* maximum of approximately (prof_interval * narenas).
*/
extern uint64_t prof_interval;
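/*
 * Illustrative numbers, not part of the original header: with
 * opt_lg_prof_interval = 30, prof_interval is 2^30 bytes (1 GiB). Because
 * every arena applies the threshold to its own allocation counter, dumps
 * still average roughly one per GiB of total allocation, but when each of,
 * say, 4 arenas sits just below its threshold, close to 4 GiB may be
 * allocated between two consecutive dumps.
 */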
/*
* Initialized as opt_lg_prof_sample, and potentially modified during profiling
* resets.
*/
extern size_t lg_prof_sample;
void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated);
void prof_malloc_sample_object(const void *ptr, size_t usize,
prof_tctx_t *tctx);
void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx);
void bt_init(prof_bt_t *bt, void **vec);
void prof_backtrace(prof_bt_t *bt);
prof_tctx_t *prof_lookup(tsd_t *tsd, prof_bt_t *bt);
#ifdef JEMALLOC_JET
size_t prof_tdata_count(void);
size_t prof_bt_count(void);
const prof_cnt_t *prof_cnt_all(void);
typedef int (prof_dump_open_t)(bool, const char *);
extern prof_dump_open_t *prof_dump_open;
typedef bool (prof_dump_header_t)(bool, const prof_cnt_t *);
extern prof_dump_header_t *prof_dump_header;
#endif
void prof_idump(void);
bool prof_mdump(const char *filename);
void prof_gdump(void);
prof_tdata_t *prof_tdata_init(tsd_t *tsd);
prof_tdata_t *prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata);
void prof_reset(tsd_t *tsd, size_t lg_sample);
void prof_tdata_cleanup(tsd_t *tsd);
const char *prof_thread_name_get(void);
bool prof_active_get(void);
bool prof_active_set(bool active);
int prof_thread_name_set(tsd_t *tsd, const char *thread_name);
bool prof_thread_active_get(void);
bool prof_thread_active_set(bool active);
bool prof_thread_active_init_get(void);
bool prof_thread_active_init_set(bool active_init);
bool prof_gdump_get(void);
bool prof_gdump_set(bool active);
void prof_boot0(void);
void prof_boot1(void);
bool prof_boot2(void);
void prof_prefork(void);
void prof_postfork_parent(void);
void prof_postfork_child(void);
void prof_sample_threshold_update(prof_tdata_t *tdata);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
bool prof_active_get_unlocked(void);
bool prof_gdump_get_unlocked(void);
prof_tdata_t *prof_tdata_get(tsd_t *tsd, bool create);
bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool update,
prof_tdata_t **tdata_out);
prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active,
bool update);
prof_tctx_t *prof_tctx_get(const void *ptr);
void prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
void prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
prof_tctx_t *tctx);
void prof_malloc_sample_object(const void *ptr, size_t usize,
prof_tctx_t *tctx);
void prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx);
void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize,
prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr,
size_t old_usize, prof_tctx_t *old_tctx);
void prof_free(tsd_t *tsd, const void *ptr, size_t usize);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_PROF_C_))
JEMALLOC_ALWAYS_INLINE bool
prof_active_get_unlocked(void)
{
/*
* Even if opt_prof is true, sampling can be temporarily disabled by
* setting prof_active to false. No locking is used when reading
* prof_active in the fast path, so there are no guarantees regarding
* how long it will take for all threads to notice state changes.
*/
return (prof_active);
}
JEMALLOC_ALWAYS_INLINE bool
prof_gdump_get_unlocked(void)
{
/*
* No locking is used when reading prof_gdump_val in the fast path, so
* there are no guarantees regarding how long it will take for all
* threads to notice state changes.
*/
return (prof_gdump_val);
}
JEMALLOC_ALWAYS_INLINE prof_tdata_t *
prof_tdata_get(tsd_t *tsd, bool create)
{
prof_tdata_t *tdata;
cassert(config_prof);
tdata = tsd_prof_tdata_get(tsd);
if (create) {
if (unlikely(tdata == NULL)) {
if (tsd_nominal(tsd)) {
tdata = prof_tdata_init(tsd);
tsd_prof_tdata_set(tsd, tdata);
}
} else if (unlikely(tdata->expired)) {
tdata = prof_tdata_reinit(tsd, tdata);
tsd_prof_tdata_set(tsd, tdata);
}
assert(tdata == NULL || tdata->attached);
}
return (tdata);
}
JEMALLOC_ALWAYS_INLINE prof_tctx_t *
prof_tctx_get(const void *ptr)
{
cassert(config_prof);
assert(ptr != NULL);
return (arena_prof_tctx_get(ptr));
}
JEMALLOC_ALWAYS_INLINE void
prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
{
cassert(config_prof);
assert(ptr != NULL);
arena_prof_tctx_set(ptr, usize, tctx);
}
JEMALLOC_ALWAYS_INLINE void
prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
prof_tctx_t *old_tctx)
{
cassert(config_prof);
assert(ptr != NULL);
arena_prof_tctx_reset(ptr, usize, old_ptr, old_tctx);
}
JEMALLOC_ALWAYS_INLINE bool
prof_sample_accum_update(tsd_t *tsd, size_t usize, bool update,
prof_tdata_t **tdata_out)
{
prof_tdata_t *tdata;
cassert(config_prof);
tdata = prof_tdata_get(tsd, true);
if ((uintptr_t)tdata <= (uintptr_t)PROF_TDATA_STATE_MAX)
tdata = NULL;
if (tdata_out != NULL)
*tdata_out = tdata;
if (tdata == NULL)
return (true);
if (tdata->bytes_until_sample >= usize) {
if (update)
tdata->bytes_until_sample -= usize;
return (true);
} else {
/* Compute new sample threshold. */
if (update)
prof_sample_threshold_update(tdata);
return (!tdata->active);
}
}
JEMALLOC_ALWAYS_INLINE prof_tctx_t *
prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, bool update)
{
prof_tctx_t *ret;
prof_tdata_t *tdata;
prof_bt_t bt;
assert(usize == s2u(usize));
if (!prof_active || likely(prof_sample_accum_update(tsd, usize, update,
&tdata)))
ret = (prof_tctx_t *)(uintptr_t)1U;
else {
bt_init(&bt, tdata->vec);
prof_backtrace(&bt);
ret = prof_lookup(tsd, &bt);
}
return (ret);
}
JEMALLOC_ALWAYS_INLINE void
prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx)
{
cassert(config_prof);
assert(ptr != NULL);
assert(usize == isalloc(ptr, true));
if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
prof_malloc_sample_object(ptr, usize, tctx);
else
prof_tctx_set(ptr, usize, (prof_tctx_t *)(uintptr_t)1U);
}
JEMALLOC_ALWAYS_INLINE void
prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
bool prof_active, bool updated, const void *old_ptr, size_t old_usize,
prof_tctx_t *old_tctx)
{
bool sampled, old_sampled;
cassert(config_prof);
assert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U);
if (prof_active && !updated && ptr != NULL) {
assert(usize == isalloc(ptr, true));
if (prof_sample_accum_update(tsd, usize, true, NULL)) {
/*
* Don't sample. The usize passed to prof_alloc_prep()
* was larger than what actually got allocated, so a
* backtrace was captured for this allocation, even
* though its actual usize was insufficient to cross the
* sample threshold.
*/
tctx = (prof_tctx_t *)(uintptr_t)1U;
}
}
sampled = ((uintptr_t)tctx > (uintptr_t)1U);
old_sampled = ((uintptr_t)old_tctx > (uintptr_t)1U);
if (unlikely(sampled))
prof_malloc_sample_object(ptr, usize, tctx);
else
prof_tctx_reset(ptr, usize, old_ptr, old_tctx);
if (unlikely(old_sampled))
prof_free_sampled_object(tsd, old_usize, old_tctx);
}
JEMALLOC_ALWAYS_INLINE void
prof_free(tsd_t *tsd, const void *ptr, size_t usize)
{
prof_tctx_t *tctx = prof_tctx_get(ptr);
cassert(config_prof);
assert(usize == isalloc(ptr, true));
if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
prof_free_sampled_object(tsd, usize, tctx);
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
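To make the sampling fast path above concrete, here is a minimal, self-contained sketch of the byte-countdown decision implemented by prof_sample_accum_update() and prof_alloc_prep(). It is not jemalloc code: sample_state_t, should_sample_alloc() and the fixed re-arm value are invented for illustration, the real function returns the opposite polarity (true means "do not sample"), and each new threshold is actually drawn from a geometric distribution by prof_sample_threshold_update().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the countdown kept per thread in prof_tdata_t
 * (bytes_until_sample); mean_sample_bytes plays the role of
 * (1 << lg_prof_sample). */
typedef struct {
	uint64_t bytes_until_sample;
	uint64_t mean_sample_bytes;
} sample_state_t;

/* Consume usize from the countdown; once it is exhausted, re-arm it and
 * report that this allocation should be sampled (i.e. the slow path that
 * calls bt_init(), prof_backtrace() and prof_lookup() would run). */
static bool
should_sample_alloc(sample_state_t *st, size_t usize)
{
	if (st->bytes_until_sample >= usize) {
		st->bytes_until_sample -= usize;
		return false;	/* fast path: tctx stays (prof_tctx_t *)1U */
	}
	st->bytes_until_sample = st->mean_sample_bytes;	/* simplified re-arm */
	return true;
}

int
main(void)
{
	sample_state_t st = { 0, 1 << 19 };	/* ~512 KiB mean, for the demo */
	size_t sizes[] = { 64, 4096, 1 << 20, 128 };
	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		printf("alloc %8zu bytes -> %s\n", sizes[i],
		    should_sample_alloc(&st, sizes[i]) ? "sampled" : "not sampled");
	}
	return 0;
}

prof_malloc() and prof_realloc() above then treat any tctx value greater than (uintptr_t)1U as "this allocation was sampled" and record it via prof_malloc_sample_object(); otherwise they only stamp the allocation with the dummy tctx value 1U.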

View File

@ -1,6 +0,0 @@
#!/bin/sh
for nm in `cat $1` ; do
n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`
echo "#define je_${n} JEMALLOC_N(${n})"
done
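A hedged usage note (the symbol entries below are made up for illustration): each input line is expected to carry the symbol name before a colon, and only that first field is used. Given an input file containing

malloc:malloc
free:free

the loop above prints

#define je_malloc JEMALLOC_N(malloc)
#define je_free JEMALLOC_N(free)

The companion script that follows emits the matching #undef lines (#undef je_malloc, #undef je_free) for the same symbol list.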

View File

@ -1,6 +0,0 @@
#!/bin/sh
for nm in `cat $1` ; do
n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`
echo "#undef je_${n}"
done

View File

@ -1,81 +0,0 @@
/* List definitions. */
#define ql_head(a_type) \
struct { \
a_type *qlh_first; \
}
#define ql_head_initializer(a_head) {NULL}
#define ql_elm(a_type) qr(a_type)
/* List functions. */
#define ql_new(a_head) do { \
(a_head)->qlh_first = NULL; \
} while (0)
#define ql_elm_new(a_elm, a_field) qr_new((a_elm), a_field)
#define ql_first(a_head) ((a_head)->qlh_first)
#define ql_last(a_head, a_field) \
((ql_first(a_head) != NULL) \
? qr_prev(ql_first(a_head), a_field) : NULL)
#define ql_next(a_head, a_elm, a_field) \
((ql_last(a_head, a_field) != (a_elm)) \
? qr_next((a_elm), a_field) : NULL)
#define ql_prev(a_head, a_elm, a_field) \
((ql_first(a_head) != (a_elm)) ? qr_prev((a_elm), a_field) \
: NULL)
#define ql_before_insert(a_head, a_qlelm, a_elm, a_field) do { \
qr_before_insert((a_qlelm), (a_elm), a_field); \
if (ql_first(a_head) == (a_qlelm)) { \
ql_first(a_head) = (a_elm); \
} \
} while (0)
#define ql_after_insert(a_qlelm, a_elm, a_field) \
qr_after_insert((a_qlelm), (a_elm), a_field)
#define ql_head_insert(a_head, a_elm, a_field) do { \
if (ql_first(a_head) != NULL) { \
qr_before_insert(ql_first(a_head), (a_elm), a_field); \
} \
ql_first(a_head) = (a_elm); \
} while (0)
#define ql_tail_insert(a_head, a_elm, a_field) do { \
if (ql_first(a_head) != NULL) { \
qr_before_insert(ql_first(a_head), (a_elm), a_field); \
} \
ql_first(a_head) = qr_next((a_elm), a_field); \
} while (0)
#define ql_remove(a_head, a_elm, a_field) do { \
if (ql_first(a_head) == (a_elm)) { \
ql_first(a_head) = qr_next(ql_first(a_head), a_field); \
} \
if (ql_first(a_head) != (a_elm)) { \
qr_remove((a_elm), a_field); \
} else { \
ql_first(a_head) = NULL; \
} \
} while (0)
#define ql_head_remove(a_head, a_type, a_field) do { \
a_type *t = ql_first(a_head); \
ql_remove((a_head), t, a_field); \
} while (0)
#define ql_tail_remove(a_head, a_type, a_field) do { \
a_type *t = ql_last(a_head, a_field); \
ql_remove((a_head), t, a_field); \
} while (0)
#define ql_foreach(a_var, a_head, a_field) \
qr_foreach((a_var), ql_first(a_head), a_field)
#define ql_reverse_foreach(a_var, a_head, a_field) \
qr_reverse_foreach((a_var), ql_first(a_head), a_field)
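Since this header only defines macros, a short usage sketch may be clearer than prose. Everything below that is not in the header (the node_s type, the link field name, the include paths) is an assumption made for illustration; the ring header must be visible first because ql_elm() expands to qr().

#include <stdio.h>
#include "qr.h"	/* assumed include paths */
#include "ql.h"

typedef struct node_s node_t;
struct node_s {
	int value;
	ql_elm(node_t) link;	/* intrusive linkage embedded in the element */
};

int
main(void)
{
	ql_head(node_t) head;
	node_t a = { .value = 1 }, b = { .value = 2 }, c = { .value = 3 };
	node_t *it;

	ql_new(&head);
	ql_elm_new(&a, link);
	ql_elm_new(&b, link);
	ql_elm_new(&c, link);

	ql_tail_insert(&head, &a, link);
	ql_tail_insert(&head, &b, link);
	ql_head_insert(&head, &c, link);	/* list is now: 3 1 2 */

	ql_foreach(it, &head, link)
		printf("%d\n", it->value);
	return 0;
}

The list is intrusive: the linkage lives inside the element itself, so building and unlinking nodes never requires separate allocations.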

View File

@ -1,69 +0,0 @@
/* Ring definitions. */
#define qr(a_type) \
struct { \
a_type *qre_next; \
a_type *qre_prev; \
}
/* Ring functions. */
#define qr_new(a_qr, a_field) do { \
(a_qr)->a_field.qre_next = (a_qr); \
(a_qr)->a_field.qre_prev = (a_qr); \
} while (0)
#define qr_next(a_qr, a_field) ((a_qr)->a_field.qre_next)
#define qr_prev(a_qr, a_field) ((a_qr)->a_field.qre_prev)
#define qr_before_insert(a_qrelm, a_qr, a_field) do { \
(a_qr)->a_field.qre_prev = (a_qrelm)->a_field.qre_prev; \
(a_qr)->a_field.qre_next = (a_qrelm); \
(a_qr)->a_field.qre_prev->a_field.qre_next = (a_qr); \
(a_qrelm)->a_field.qre_prev = (a_qr); \
} while (0)
#define qr_after_insert(a_qrelm, a_qr, a_field) do { \
(a_qr)->a_field.qre_next = (a_qrelm)->a_field.qre_next; \
(a_qr)->a_field.qre_prev = (a_qrelm); \
(a_qr)->a_field.qre_next->a_field.qre_prev = (a_qr); \
(a_qrelm)->a_field.qre_next = (a_qr); \
} while (0)
#define qr_meld(a_qr_a, a_qr_b, a_field) do { \
void *t; \
(a_qr_a)->a_field.qre_prev->a_field.qre_next = (a_qr_b); \
(a_qr_b)->a_field.qre_prev->a_field.qre_next = (a_qr_a); \
t = (a_qr_a)->a_field.qre_prev; \
(a_qr_a)->a_field.qre_prev = (a_qr_b)->a_field.qre_prev; \
(a_qr_b)->a_field.qre_prev = t; \
} while (0)
/*
* qr_meld() and qr_split() are functionally equivalent, so there's no need to
* have two copies of the code.
*/
#define qr_split(a_qr_a, a_qr_b, a_field) \
qr_meld((a_qr_a), (a_qr_b), a_field)
#define qr_remove(a_qr, a_field) do { \
(a_qr)->a_field.qre_prev->a_field.qre_next \
= (a_qr)->a_field.qre_next; \
(a_qr)->a_field.qre_next->a_field.qre_prev \
= (a_qr)->a_field.qre_prev; \
(a_qr)->a_field.qre_next = (a_qr); \
(a_qr)->a_field.qre_prev = (a_qr); \
} while (0)
#define qr_foreach(var, a_qr, a_field) \
for ((var) = (a_qr); \
(var) != NULL; \
(var) = (((var)->a_field.qre_next != (a_qr)) \
? (var)->a_field.qre_next : NULL))
#define qr_reverse_foreach(var, a_qr, a_field) \
for ((var) = ((a_qr) != NULL) ? qr_prev(a_qr, a_field) : NULL; \
(var) != NULL; \
(var) = (((var) != (a_qr)) \
? (var)->a_field.qre_prev : NULL))
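The ring macros can also be used on their own; this is a hedged sketch with an invented ring_node_s type, link field and include path, not code from the tree.

#include <stdio.h>
#include "qr.h"	/* assumed include path */

typedef struct ring_node_s ring_node_t;
struct ring_node_s {
	char name;
	qr(ring_node_t) link;
};

int
main(void)
{
	ring_node_t a = { .name = 'a' }, b = { .name = 'b' }, c = { .name = 'c' };
	ring_node_t *it;

	qr_new(&a, link);
	qr_new(&b, link);
	qr_new(&c, link);
	qr_after_insert(&a, &b, link);	/* ring: a b */
	qr_after_insert(&b, &c, link);	/* ring: a b c */

	qr_foreach(it, &a, link)	/* prints a b c */
		printf("%c\n", it->name);

	qr_remove(&b, link);		/* ring: a c; b becomes a singleton again */
	qr_reverse_foreach(it, &a, link)	/* prints c a */
		printf("%c\n", it->name);
	return 0;
}

Because qr_new() links an element to itself, a single element is already a valid one-entry ring, and qr_remove() never needs to special-case emptiness.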

View File

@ -1,60 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct quarantine_obj_s quarantine_obj_t;
typedef struct quarantine_s quarantine_t;
/* Default per thread quarantine size if valgrind is enabled. */
#define JEMALLOC_VALGRIND_QUARANTINE_DEFAULT (ZU(1) << 24)
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct quarantine_obj_s {
void *ptr;
size_t usize;
};
struct quarantine_s {
size_t curbytes;
size_t curobjs;
size_t first;
#define LG_MAXOBJS_INIT 10
size_t lg_maxobjs;
quarantine_obj_t objs[1]; /* Dynamically sized ring buffer. */
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void quarantine_alloc_hook_work(tsd_t *tsd);
void quarantine(tsd_t *tsd, void *ptr);
void quarantine_cleanup(tsd_t *tsd);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
void quarantine_alloc_hook(void);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_QUARANTINE_C_))
JEMALLOC_ALWAYS_INLINE void
quarantine_alloc_hook(void)
{
tsd_t *tsd;
assert(config_fill && opt_quarantine);
tsd = tsd_fetch();
if (tsd_quarantine_get(tsd) == NULL)
quarantine_alloc_hook_work(tsd);
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
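quarantine_s describes a ring buffer of deferred frees, with the oldest entry at index first. As a rough illustration of the bookkeeping those fields imply, here is a minimal sketch; it is not quarantine.c, and the fixed capacity, the byte-budget parameter (standing in for opt_quarantine) and the dequarantine() helper are all assumptions made for illustration.

#include <stddef.h>
#include <stdio.h>

#define LG_MAXOBJS 3			/* 8 slots; cf. lg_maxobjs */
#define MAXOBJS (1 << LG_MAXOBJS)

typedef struct { void *ptr; size_t usize; } q_obj_t;

typedef struct {
	size_t curbytes;	/* bytes currently quarantined */
	size_t curobjs;		/* entries currently quarantined */
	size_t first;		/* index of the oldest entry */
	q_obj_t objs[MAXOBJS];	/* fixed-size here; grown dynamically in the real code */
} quarantine_sketch_t;

/* Stand-in for handing the memory back to the allocator. */
static void
dequarantine(const q_obj_t *obj)
{
	printf("release %zu bytes\n", obj->usize);
}

/* Append one freed object, evicting the oldest entries while the byte budget
 * or the slot count would otherwise be exceeded. */
static void
quarantine_push(quarantine_sketch_t *q, void *ptr, size_t usize, size_t max_bytes)
{
	while (q->curobjs > 0 &&
	    (q->curbytes + usize > max_bytes || q->curobjs == MAXOBJS)) {
		q_obj_t *oldest = &q->objs[q->first];
		q->curbytes -= oldest->usize;
		q->curobjs--;
		q->first = (q->first + 1) % MAXOBJS;
		dequarantine(oldest);
	}
	size_t slot = (q->first + q->curobjs) % MAXOBJS;
	q->objs[slot].ptr = ptr;
	q->objs[slot].usize = usize;
	q->curbytes += usize;
	q->curobjs++;
}

int
main(void)
{
	static quarantine_sketch_t q;	/* zero-initialized */
	static char blocks[4][64];
	for (int i = 0; i < 4; i++)
		quarantine_push(&q, blocks[i], sizeof(blocks[i]), 128);
	return 0;
}

The quarantine's purpose, suggested by the Valgrind default above and by the config_fill assertion in quarantine_alloc_hook(), is to delay reuse of freed memory so that use-after-free bugs have a window in which they can still be detected.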

Some files were not shown because too many files have changed in this diff.