Send and receive buffer size
The 'send_buffer_size' and 'receive_buffer_size' options can be used to adjust the socket send and receive buffer sizes. Increasing the send buffer size may help reduce disk seeking, as more data is buffered each time the socket is written to. On Linux you can run "cat /proc/sys/net/ipv4/tcp_wmem" to see the minimum, default and maximum send buffer sizes, respectively.
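As a sketch, the corresponding .rtorrent.rc lines might look like the following. The values are illustrative, not recommendations, and the assumption here is that the options take a byte count:

```
# Illustrative values only; tune to your connection and available memory.
send_buffer_size = 1048576     # 1 MB
receive_buffer_size = 524288   # 512 KB
```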
Max memory usage
The amount of memory address space used for mapping chunks is limited to "ulimit -m" or 1GB. For fast downloads and/or a large number of peers this may quickly be exhausted, causing the client to hang while it syncs to disk. You may increase this limit with the "max_memory_usage" option.
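A hedged example of raising the limit in .rtorrent.rc, assuming the option takes a byte count (the value is an illustration only):

```
# Raise the mapped-memory ceiling to roughly 3 GB (illustrative value)
max_memory_usage = 3221225472
```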
Use of fd_set and epoll
Due to libcurl's use of fd_set for polling, rTorrent cannot currently move to a pure epoll implementation. The epoll code falls back to select-based polling if, and only if, libcurl is active. All non-libcurl sockets remain in epoll, but select is used on the libcurl sockets and on the epoll socket itself.
Variable fd_set size
By default rTorrent uses variable-sized fd_sets whose size depends on the process's sysconf(_SC_OPEN_MAX) limit. This is non-portable and can therefore be disabled by compiling rTorrent with the --without-variable-fdset flag. Use "ulimit -n" to change the open files limit.
Large fd_sets incur a performance penalty, as they must be cleared each time the client polls the sockets. When using select- or epoll-based polling (the latter until libcurl is fixed), use an open files limit that is reasonably low. The widely used default of 1024 is enough for most users, and 64 is the minimum. Those with embedded devices or older platforms might need to set the limit much lower than the default.
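To check and adjust the open files limit for the shell that launches rTorrent, something like the following works in a standard POSIX shell (a sketch, not a tuned recommendation):

```shell
# Show the current per-process open-files limit
# (this is what sysconf(_SC_OPEN_MAX) reflects)
ulimit -n
# Lower the soft limit for this shell and its children
# before starting rtorrent
ulimit -S -n 1024
```

The change only affects the current shell session; put it in the shell's startup file or a wrapper script to make it stick.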
Hash checking performance
The hash_read_ahead setting controls how many MB the kernel is asked to read ahead when doing hash checking. If the value is too low it might not fully utilize the available IO, while too high a value might make the kernel give up on the read-ahead. The right value depends on how much free memory is available and on the kernel implementation. Useful values probably range from 1 to 10.
I have tweaked the settings in my .rtorrent.rc, and this enables me to hash at about 10Mb/sec. Listed below are the settings I use.
- hash_read_ahead = 8
- hash_max_tries = 5
- hash_interval = 10
The real performance boost came from the hash_interval change; it yielded the almost tenfold increase in speed over the default settings.
File allocation and disk usage
Opening a torrent causes files to be created and resized with ftruncate (though ftruncate has problems on VFAT filesystems; see ticket #39). This does not actually use disk space until data is written, regardless of the file sizes reported. Use du without the --apparent-size flag to see the real disk usage.
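The difference is easy to see with a sparse file created the same way; here the truncate utility stands in for rtorrent's ftruncate call (assumes GNU coreutils):

```shell
# Create a 100 MB sparse file, much as rtorrent does when opening a torrent
truncate -s 100M sparse-demo
du -h sparse-demo                  # real disk usage: close to zero
du -h --apparent-size sparse-demo  # reported size: 100M
rm sparse-demo
```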
Multiple torrents and performance
Q: I usually have several tens of torrents open, but the BitTorrent FAQ (http://www.bittorrent.com/FAQ.html#simultaneoustorrents) says it's a bad idea. What should I do?
A: The problem is that if you don't have enough bandwidth to download those torrents at full speed, the performance of the BitTorrent system as a whole degrades, since you are taking up slots on the machines you download from without downloading at full speed. So if you run lots of torrents, there would need to be some way to activate downloading from only one torrent at a time, so you can max out the transfer for that torrent.