Use lock in limit req #48

Closed. Wants to merge 5 commits.
1 change: 1 addition & 0 deletions .travis.yml
@@ -36,6 +36,7 @@ install:
- git clone https://github.com/openresty/no-pool-nginx.git ../no-pool-nginx
- git clone https://github.com/openresty/lua-resty-lrucache.git ../lua-resty-lrucache
- git clone https://github.com/openresty/lua-resty-core.git ../lua-resty-core
- git clone https://github.com/openresty/lua-resty-lock.git ../lua-resty-lock
- git clone -b v2.1-agentzh https://github.com/openresty/luajit2.git

script:
8 changes: 7 additions & 1 deletion README.md
@@ -285,7 +285,13 @@ Installation
This library is enabled by default in [OpenResty](https://openresty.org/) 1.11.2.2+.

If you have to install this library manually,
then ensure you are using at least OpenResty 1.11.2.1 or a custom nginx build including ngx_lua 0.10.6+. Also, You need to configure
then ensure you are using at least OpenResty 1.11.2.1 or a custom nginx build including ngx_lua 0.10.6+.

If you use a lock by passing a non-nil value to the `lock_shdict_name` parameter of
the [resty.limit.req new](lib/resty/limit/req.md#new) method, then ensure you also install
[openresty/lua-resty-lock](https://github.com/openresty/lua-resty-lock#prerequisites).

Also, you need to configure
the [lua_package_path](https://github.com/openresty/lua-nginx-module#lua_package_path) directive to
add the path of your `lua-resty-limit-traffic` source tree to ngx_lua's Lua module search path, as in

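The nginx.conf snippet that the paragraph above refers to is truncated in this diff view. A minimal sketch of the kind of setup it describes is shown below; the paths and shm zone names are placeholders rather than part of this patch, and the lock zone is only needed when a lock name is passed to `new()`.

```nginx
# sketch only: adjust the paths and zone names to your own deployment
http {
    lua_package_path "/path/to/lua-resty-limit-traffic/lib/?.lua;/path/to/lua-resty-lock/lib/?.lua;;";

    # counters used by resty.limit.req
    lua_shared_dict my_limit_req_store 100m;

    # small zone backing resty.lock; only needed when lock_shdict_name is passed to new()
    lua_shared_dict my_limit_req_lock 12k;
}
```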
50 changes: 48 additions & 2 deletions lib/resty/limit/req.lua
@@ -47,18 +47,27 @@ local mt = {
}


function _M.new(dict_name, rate, burst)
function _M.new(dict_name, rate, burst, lock_dict_name)
    local dict = ngx_shared[dict_name]
    if not dict then
        return nil, "shared dict not found"
    end

    if lock_dict_name then
        local lock_dict = ngx_shared[lock_dict_name]
        if not lock_dict then
            return nil, "lock shared dict not found"
        end
    end

    assert(rate > 0 and burst >= 0)

    local self = {
        dict = dict,
        rate = rate * 1000,
        burst = burst * 1000,
        dict_name = dict_name,
        lock_dict_name = lock_dict_name,
    }

    return setmetatable(self, mt)
@@ -67,21 +76,45 @@ end

-- sees a new incoming event
-- the "commit" argument controls whether we should record the event in shm.
-- FIXME we have a (small) race-condition window between dict:get() and
-- NOTE: if lock_dict_name is not set in limit_req.new(),
-- we have a (small) race-condition window between dict:get() and
-- dict:set() across multiple nginx worker processes. The size of the
-- window is proportional to the number of workers.
function _M.incoming(self, key, commit)
    local dict = self.dict
    local rate = self.rate
    local lock_dict_name = self.lock_dict_name
    local now = ngx_now() * 1000

    local excess

    local lock
    if lock_dict_name then
        local resty_lock = require "resty.lock"

        local err
        lock, err = resty_lock:new(lock_dict_name)
        if not lock then
            return nil, err
        end

        local elapsed, err = lock:lock(self.dict_name)
        if not elapsed then
            return nil, err
        end
    end

    -- it's important to anchor the string value for the read-only pointer
    -- cdata:
    local v = dict:get(key)
    if v then
        if type(v) ~= "string" or #v ~= rec_size then
            if lock then
                local ok, err = lock:unlock()
                if not ok then
                    return nil, err
                end
            end
            return nil, "shdict abused by other users"
        end
        local rec = ffi_cast(const_rec_ptr_type, v)
@@ -98,6 +131,12 @@ function _M.incoming(self, key, commit)
-- print("excess: ", excess)

if excess > self.burst then
if lock then
local ok, err = lock:unlock()
if not ok then
return nil, err
end
end
return nil, "rejected"
end

@@ -111,6 +150,13 @@ function _M.incoming(self, key, commit)
        dict:set(key, ffi_str(rec_cdata, rec_size))
    end

    if lock then
        local ok, err = lock:unlock()
        if not ok then
            return nil, err
        end
    end

    -- return the delay in seconds, as well as excess
    return excess / rate, excess / 1000
end
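For readers who have not used lua-resty-lock before, the `resty_lock:new()`, `lock:lock()` and `lock:unlock()` calls added above follow the general pattern sketched below: the lock serializes the `get()`/`set()` pair across worker processes, which is exactly the window the old FIXME comment described. The zone, key and function names here are placeholders, not taken from this patch.

```lua
-- minimal sketch of the lua-resty-lock pattern, assuming these zones are
-- declared in nginx.conf:
--   lua_shared_dict my_store 10m;
--   lua_shared_dict my_locks 12k;
local resty_lock = require "resty.lock"

local function protected_bump(key)
    local lock, err = resty_lock:new("my_locks")
    if not lock then
        return nil, "failed to create lock: " .. err
    end

    local elapsed, err = lock:lock(key)
    if not elapsed then
        return nil, "failed to acquire lock: " .. err
    end

    -- critical section: no other worker can interleave between get() and set()
    local store = ngx.shared.my_store
    local value = (store:get(key) or 0) + 1
    store:set(key, value)

    local ok, err = lock:unlock()
    if not ok then
        return nil, "failed to unlock: " .. err
    end

    return value
end
```

The patch applies the same pattern inside `incoming()`, using the limit dict's own name as the lock key and unlocking on every early-return path.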
66 changes: 65 additions & 1 deletion lib/resty/limit/req.md
@@ -87,6 +87,65 @@ http {
}
```

```nginx
# demonstrate the usage of the resty.limit.req module (alone!) with lock
http {
    lua_shared_dict my_limit_req_store 100m;
    lua_shared_dict my_limit_req_lock 12k;

    server {
        location / {
            access_by_lua_block {
                -- well, we could put the require() and new() calls in our own Lua
                -- modules to save overhead. here we put them below just for
                -- convenience.

                local limit_req = require "resty.limit.req"

                -- limit the requests under 200 req/sec with a burst of 100 req/sec,
                -- that is, we delay requests under 300 req/sec and above 200
                -- req/sec, and reject any requests exceeding 300 req/sec.
                local lim, err = limit_req.new("my_limit_req_store", 200, 100, "my_limit_req_lock")
                if not lim then
                    ngx.log(ngx.ERR,
                            "failed to instantiate a resty.limit.req object: ", err)
                    return ngx.exit(500)
                end

                -- the following call must be per-request.
                -- here we use the remote (IP) address as the limiting key
                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit req: ", err)
                    return ngx.exit(500)
                end

                if delay >= 0.001 then
                    -- the 2nd return value holds the number of excess requests
                    -- per second for the specified key. for example, number 31
                    -- means the current request rate is at 231 req/sec for the
                    -- specified key.
                    local excess = err

                    -- this request exceeds 200 req/sec but is below 300 req/sec,
                    -- so we intentionally delay it here a bit to conform to the
                    -- 200 req/sec rate.
                    ngx.sleep(delay)
                end
            }

            # content handler goes here. if it is content_by_lua, then you can
            # merge the Lua code above in access_by_lua into your content_by_lua's
            # Lua handler to save a little bit of CPU time.
        }
    }
}
```

Description
===========

@@ -108,7 +167,7 @@ Methods

new
---
**syntax:** `obj, err = class.new(shdict_name, rate, burst)`
**syntax:** `obj, err = class.new(shdict_name, rate, burst, lock_shdict_name)`

Instantiates an object of this class. The `class` value is returned by the call `require "resty.limit.req"`.

@@ -125,6 +184,11 @@ This method takes the following arguments:
Requests exceeding this hard limit
will get rejected immediately.

* `lock_shdict_name` is an optional argument giving the name of the [lua_shared_dict](https://github.com/openresty/lua-nginx-module#lua_shared_dict) shm zone
used to lock the shared dict specified by `shdict_name`. If `lock_shdict_name` is omitted or `nil`, no lock is used and there is a small race-condition window
between the get and set operations on the shared dict specified by `shdict_name` across multiple nginx worker processes. The size of the window is proportional to the number of workers (see the sketch below).
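
A minimal sketch (not part of this patch) contrasting the two modes described above; the shm zone names are placeholders and assume matching `lua_shared_dict` directives in `nginx.conf`.

```lua
local limit_req = require "resty.limit.req"

-- without a lock: no extra dependency, but the small cross-worker race window applies
local lim_fast, err = limit_req.new("my_limit_req_store", 200, 100)

-- with a lock: pass the name of a dedicated shm zone as the 4th argument;
-- this requires lua-resty-lock to be installed and on the Lua module search path
local lim_locked, err = limit_req.new("my_limit_req_store", 200, 100, "my_limit_req_lock")
```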


On failure, this method returns `nil` and a string describing the error (like a bad `lua_shared_dict` name).

[Back to TOC](#table-of-contents)