Three Reasons Why You Should Not Use iPerf3 on Windows
Published Apr 18 2024

James Kehr here with the Microsoft Commercial Support – Windows Networking team. This article explains why you should not use iPerf3 on Windows for synthetic network benchmarking and testing, followed by a brief explanation of why you should use ntttcp and ctsTraffic instead.

 

UPDATE (22 April 2024): Updates were made throughout the article based on feedback in the comments.

Reason 1 – ESnet Does Not Support Windows

 

iPerf3 is owned and maintained by an organization called ESnet (Energy Sciences Network). They neither officially support nor recommend using iPerf3 on Windows. Their recommendation is to use iPerf2. More on the Microsoft recommendation later.

 

Here are some direct quotes from the official ESnet iPerf3 FAQ, retrieved on 18 April 2024.

 

I’m trying to use iperf3 on Windows, but having trouble. What should I do?

 

iperf3 is not officially supported on Windows, but iperf2 is. We recommend you use iperf2.

 

UPDATE (22 April 2024): Please read the comment(s) from @rjmcmahon for details about iPerf2 vs iPerf3.

 

And from the ESnet “Obtaining iPerf3” article, retrieved on 18 April 2024.

 

Primary development for iperf3 takes place on CentOS 7 Linux, FreeBSD 11, and macOS 10.12. At this time, these are the only officially supported platforms…

 

Microsoft does not recommend using iPerf3 for a different reason.

 

Reason 2 – iPerf3 is Emulated on Windows

 

iPerf3 does not make Windows native API calls. It only knows how to make Linux/POSIX calls.

 

The iPerf3 community uses Cygwin as an emulation layer to get iPerf3 working on Windows. You can read more about Cygwin in their FAQ.

The iPerf3 calls are sent to Cygwin, which translates them into Windows API calls. Only then does the Windows network stack come into play. The iPerf3 on Windows maintainers do an excellent job of making it all work together, but, ultimately, there are potential issues with this approach.

 

Not all iPerf3 features work on Windows. The basic options work well, but advanced capabilities needed for certain network testing may not be available on Windows or may behave in unexpected ways.

 

Emulation tends to carry a performance penalty. The emulation overhead on a latency-sensitive operation, such as network testing, can result in lower-than-expected throughput.

 

Finally, iPerf3 uses Windows Sockets (Winsock) options that are uncommon among native Windows applications. For generic throughput testing this is fine, but for application testing these uncommon socket options will not mimic real-world Windows-native application behavior.

 

Reason 3 – You Are Probably Using an Old Version of iPerf3

 

UPDATE (22 April 2024): iperf.fr no longer serves the old Windows iPerf3 binaries. The site now links to other sites which have actively maintained iPerf3 for Windows binaries. A big thank you to @Harvester for pointing this out in comments and to the iperf.fr team for updating their site!

 

Go search for “iPerf3 on Windows” on the web. Go ahead, open a tab, and use your search engine of choice, which I am certain is Bing with Copilot.

 

What is the top result, and thus the most likely link you will click on? I bet it is iperf.fr.

 

The newest version of iPerf3 for Windows on iperf.fr is 3.1.3 from 8 June 2016. That was nearly 8 years ago at the time of writing.

 

The current version of iPerf3, directly from ESnet, is 3.16. That is 15 releases’ worth of bug fixes, features, and changes missing from the version people are most likely to download.

 

This specific copy of iPerf3 from iperf.fr includes a version of cygwin1.dll that contains a bug limiting the socket buffer to 1MB. This causes poor performance on high-bandwidth, high-latency networks because iPerf3 cannot put enough data in flight to saturate the link, resulting in inaccurate testing.

 

Where should you look for iPerf3 on Windows?

 

From ESnet’s “Obtaining iPerf3” article, they say:

 

Windows: iperf3 binaries for Windows (built with Cygwin) can be found in a variety of locations, including https://files.budman.pw/ (discussion thread).

 

What Does Microsoft Recommend?

 

Microsoft maintains two synthetic network benchmarking tools: ntttcp (Windows NT Test TCP) and ctsTraffic. The newest version of ntttcp is maintained on GitHub. This is a Windows native tool which utilizes Windows networking in the same way a native Windows application does.

 

But what about Linux?

 

There is a Linux version of ntttcp, too! Details can be found on the ntttcp for Linux GitHub repo. This is a separate codebase built for Linux that is compatible with ntttcp for Windows, but it is not identical to the Windows counterpart.

 

Ntttcp allows you to perform API-native synthetic network tests between Windows and Windows, Linux and Linux, and Windows and Linux.

 

ctsTraffic is Windows-to-Windows only. Where ntttcp is more iPerf3-like, ctsTraffic has a different set of options and goals. ctsTraffic focuses on end-to-end goodput scenarios, where ntttcp and iPerf3 focus more on isolating network stack throughput.

 

How do you use ntttcp?

 

The Azure team has written a great article about basic ntttcp functionality for Windows and Linux. I do not believe in reinventing the wheel, so I will simply link you to the article.

 

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-bandwidth-testing?tabs=windo...

 

There is a known interoperability limitation when testing between Windows and Linux. Details can be found in this ntttcp for Linux wiki article on GitHub.

 

Testing

 

I built a lab while preparing this article using two Windows Server 2022 VMs. The tests used the newest versions of iPerf3 (3.16), ntttcp (5.39), and ctsTraffic (2.0.3.3).

 

The default iPerf3 parameters are the most common configuration I see among Microsoft support customers, so I am tuning ntttcp and ctsTraffic to better match iPerf3’s default single-connection, 128KB buffer length behavior. While this is not a perfect comparison, it does make for a fairer one.

 

Single stream tests are used for targeted analyses since many applications do not perform multi-threaded transfers. Bandwidth and maximum throughput testing should be multi-threaded with large buffers, but that is a topic for a different day.

 

Don’t forget to allow the test traffic through the Windows Defender Firewall if you wish to run your own tests.
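
If you script your test setup, PowerShell’s New-NetFirewallRule cmdlet can open the needed ports. The sketch below is a minimal example; the port numbers are assumptions based on each tool’s commonly documented defaults (5201 for iPerf3, 5001 for ntttcp, 4444 for ctsTraffic), so adjust them to whatever your test actually uses.

# Open inbound TCP ports on the server/receiver side (run as Administrator).
# Port numbers are assumed defaults -- verify against the tool and options you run.
New-NetFirewallRule -DisplayName "iPerf3 server" -Direction Inbound -Protocol TCP -LocalPort 5201 -Action Allow
New-NetFirewallRule -DisplayName "ntttcp receiver" -Direction Inbound -Protocol TCP -LocalPort 5001 -Action Allow
New-NetFirewallRule -DisplayName "ctsTraffic listener" -Direction Inbound -Protocol TCP -LocalPort 4444 -Action Allow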

 

iPerf3

 

iPerf3 server command:

iperf3 -s

iPerf3 client command:

iperf3 -c <IP> -t 60

The average across multiple tests was about 7.5 Gbps. The top result was 8.5 Gbps, with a low of 5.26 Gbps.

 

ntttcp

 

Ntttcp server command:

ntttcp -r -m 1,*,<IP> -t 60

Ntttcp client command:

ntttcp -s -m 1,*,<IP> -l 128K -t 60

Ntttcp averaged about 12.75 Gbps across multiple tests. The top test averaged 13.5 Gbps, with a low test of 12.5 Gbps.

 

Ntttcp does something called pre-posting receives, which is unique to this tool. This reduces application wait time as part of network stack isolation, allowing for quicker than normal application responses to socket messages.

 

-r is receiver, and -s is sender.

 

-m is a mapping of three comma-separated values: <num threads>,<CPU affinity>,<Target IP>. In this test we use a single thread, no CPU affinity (*), and both the -r and -s sides use the target IP address as the final value.

 

-t is test time, in seconds.

 

-l sets the buffer length. You can use K|M|G with ntttcp as shorthand for kilo-, mega-, and gigabytes.
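
Putting those flags together, a hypothetical multi-threaded, large-buffer run (the kind mentioned earlier for bandwidth and maximum throughput testing, rather than this article’s single-stream comparison) might look like the sketch below. The thread count and buffer length are illustrative, not tuned recommendations.

# Receiver, 8 threads, no CPU affinity:
ntttcp -r -m 8,*,<IP> -t 60

# Sender, 8 threads, 1MB buffer length:
ntttcp -s -m 8,*,<IP> -l 1M -t 60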

 

ctsTraffic

 

These commands are run in PowerShell to make reading values easier.
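
Specifically, PowerShell expands the KB/MB/GB/TB numeric suffixes into byte counts before ctstraffic.exe ever sees them, so "$(128KB)" is passed to the tool as the plain string 131072. You can verify the expansion yourself:

# PowerShell treats KB/MB/GB/TB as binary multipliers:
PS> 128KB
131072
PS> "$(1TB)"
1099511627776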

 

ctsTraffic server command:

.\ctstraffic.exe -listen:* -Buffer:"$(128KB)" -Transfer:"$(1TB)" -ServerExitLimit:1 -consoleverbosity:1 -TimeLimit:60000

ctsTraffic client command:

.\ctstraffic.exe -target:<IP> -Connections:1 -Buffer:"$(128KB)" -Transfer:"$(1TB)" -Iterations:1 -consoleverbosity:1 -TimeLimit:60000

The result: about 9.2 Gbps on average. It is a little faster and far more consistent than iPerf3, but not quite as fast as ntttcp. The two primary reasons ctsTraffic is slower are its data integrity checks and its use of the recommended overlapped IO model, meaning ctsTraffic uses a single pending receive rather than pre-posting receives like ntttcp.

 

-Buffer is the buffer length (ntttcp: -l).

 

-Transfer is the amount of data to send per iteration.

 

-Iterations/-ServerExitLimit is the number of times the data set will be transferred.

 

-Connections is the number of concurrent TCP streams that will be used.

 

-TimeLimit is the number of milliseconds to run the test. The test stops when the time limit is reached, even if the iteration transfer has not completed.

 

Thank you for reading and I hope this helps improve your understanding of synthetic network benchmarking on Windows!

Comments
Copper Contributor

What about iperf3 on WSL/WSL2? Do you also not recommend it?

Copper Contributor

@Christopher Demicoli that is exactly what I was wondering too. WSLx must be possible too, no emulation.

Microsoft

Great question, @baskapteijn and @Christopher Demicoli. iPerf3 will run natively inside of WSL2. There are no emulation issues there.

 

But... the WSL2 network traffic will traverse either the NAT or mirrored network subsystem inside of Windows. These are software network layers, which means additional latency and possible performance limitations will be incurred on the WSL2 iPerf3 traffic.

 

So please keep in mind that the iPerf3 results from within WSL2 will not be the same as Windows native ntttcp/ctsTraffic results. Using iPerf3 in WSL2 would measure WSL2 network performance, not Windows native performance.

 

WSL1 is Linux syscall emulation on Windows. iPerf3 will kind of run natively, but all the syscalls are emulated. iPerf3 should work, but I have not personally tested it.
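
For anyone who wants to experiment with the two WSL2 networking modes mentioned above, the mode is selected in the .wslconfig file in your Windows user profile. This is a minimal sketch and assumes a Windows and WSL version recent enough to support mirrored networking; NAT is the default when nothing is set.

# %UserProfile%\.wslconfig -- WSL2 global settings
[wsl2]
# "nat" (default) or "mirrored"
networkingMode=mirrored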

Copper Contributor

I can see why a company would like you to use their own measuring device to do performance testing. Wow this car can do 400mph according to the speedometer!

 

But seriously, with iPerf I am usually interested in finding the cause of performance issues (usually detecting the worst cases). If that is a bad emulation layer in Windows, that is what I am expecting to find. If MS would like all applications to use some other OS calls or libraries or complete tool chains, they should make them easily available, portable, and compatible. If this has been arranged already, it should not be any problem for MS to just create a pull request for iPerf3 and fix the problem.

 

This would also fix the problem for all the other applications that are using the same network libraries. I would assume MS has fixed a native version of Git by now that is not relying on MinGW. But I am sure a lot of other software still does. So for those, iPerf3 will tell the truth, while ctsTraffic etc. will give you some theoretical number.

 

Microsoft

@Brodde the source code for ntttcp (Windows and Linux) and ctsTraffic is freely available on GitHub. The links are in the article. You are welcome to scrutinize the code, build your own executables, and test whether the Microsoft numbers and methodologies are accurate or not.

 

Also, all synthetic network benchmark tools measure theoretical performance. Even iPerf3. These tools measure network stack isolated performance by removing reliance on things like storage, processing time, and other overheads associated with production applications and processes.

Copper Contributor

@JamesKehr Though I think it is good that some of Microsoft's source code is released as FOSS, I am not going to review the code for 2 (or 3) different applications just to find out that they are not measuring the actual performance of the system. I would not go through the design specs, blueprints, and QA documents for the speedometer of every car brand, even if they were available. I would just use a third-party tool that I can use with every car.

 

If all of the 3 MS test applications were one single set of portable source code, I would perhaps be more inclined to review and use it. If it had the option to test all the network libraries that are possible to use in Windows, that would perhaps sell it even better. A tool like that could even help with black-box network analysis of closed-source applications.

 

Until then I will continue recommending iPerf3 or even a simple "time scp..." to do network performance testing. At least I am (hopefully) getting the OS worst case, not a misleadingly high result.

Microsoft

@Brodde You are, of course, welcome to recommend whatever you prefer. Just as Microsoft is welcome to recommend whatever we prefer.

 

There are other ways you can test without viewing source code. It's not like Microsoft can quantum teleport packets between Windows and Linux (could you even imagine how much wealthier we'd be if we could?).

 

Set up a port mirror (SPAN in Cisco terms), switch logging, and/or a sniffer appliance/tool of your choice between two systems of your choice. Run an ntttcp trace between said systems. Measure the throughput on the wire and see how it compares with reported results. Or use ntttcp on Linux plus whatever trusted Linux tools and metrics you prefer to measure generic throughput. It would be rather trivial to certify that the results are accurate, since the traffic must traverse mediums that Windows and Microsoft do not control. If you do find a reporting error, because errors do happen, please submit an issue on GitHub, let me know, and I will engage people who can address it.
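
As a lighter-weight local cross-check (a sketch, not a substitute for the port mirror approach above; the adapter name "Ethernet" is a placeholder), you can sample the NIC byte counters around a test run with PowerShell and compare against the tool's reported numbers:

# Sample received bytes before and after a 60-second test window.
$before = (Get-NetAdapterStatistics -Name "Ethernet").ReceivedBytes
Start-Sleep -Seconds 60   # run the ntttcp/ctsTraffic test during this window
$after = (Get-NetAdapterStatistics -Name "Ethernet").ReceivedBytes
($after - $before) * 8 / 60 / 1e9   # average Gbps on the wire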

 

But, please, do not make unfounded accusations here. 

Copper Contributor

@JamesKehr I would never make unfounded accusations.

 

I have done network testing for decades. And iPerf is just one of the tools in the toolbox. I think you are completely missing the point. I always try to use something that acts like the applications I am going to run on all the systems in the network to get a fair idea of the network performance. If I was just interested in saturating the network a simple flood ping would do the trick, but that does not tell me much. Just like a specially crafted piece of software from the OS vendor will not tell me much. If I was going to create a piece of software that runs on all the nodes, it would most likely not come anywhere near the performance of ntttcp. And if my applications happen to run worse on a Windows system than on a Mac, fine, I can live with that. But I would like to know that, so I do not spend time looking for bottlenecks in the network. If iPerf3 used MSYS2 it would perhaps perform better, so perhaps MS should make a pull request.

 

I guess it depends on what the goal is. If my goal was to show off the theoretical throughput maximum of Windows, I would probably use ntttcp. (Or ping flood).

Copper Contributor

FYI, the iperf.fr webpage has been fixed, so Reason 3 can be safely removed :)

Copper Contributor

There is some confusion here. First, some information on iperf 2 which is not the predecessor to iperf 3. Iperf 2 is the continuation of the original iperf that was abandoned in 2010 or so. We in Wi-Fi chips took over maintenance in 2014. It's mostly been rewritten since then though adhering to the major design patterns, e.g. threads and separation of user i/o from traffic i/o. More about the differences here: https://iperf2.sourceforge.io/IperfCompare.html

To your reasons (which don't necessarily apply to iperf 2): also note that tools really need broad statistical verification before they can be used broadly and across platforms. This is non-trivial and goes beyond code writing.

Reason 1: There really is minimal support by ESNet for any release. They are a research group. The most testing & support, including statistical verification from release to release, comes with iperf 2 because Wi-Fi mfgs have something like 14B devices in the field. We also do test 100G+ networks as chips (NICs and merchant forwarding silicon) support that too.

Reason 2: MinGW support does allow for posix calls. Some key things are:

  • Some use iperf for network measurements
  • Some use iperf to characterize applications that use TCP/UDP or end to end which includes host stacks and app level writes/reads.

The CPU, and hence emulation, should not be the bottleneck for network measurement. If it is, then iperf is not a network test tool anymore. It becomes a syscall limited tool. Second, interactions with the stack can impact interoperability but that usually shows up as latency of syscalls that get seen by the network message passing. Again, not a network issue, but a stack/app issue. So, from a network capacity test perspective, the use of posix emulation shouldn't matter. Latency measurement or CPU constrained systems need more analysis and posix emulation could have impacts.


Reason 3: Yes, iperf.fr is woefully out of date and spreads misinformation. A suggestion is to go to the authoritative sites for information. There is much misinformation found on the net, lots and lots of misunderstandings.

Also, your article conflates two different metrics, throughput and latency. Iperf 2 has had a focus on latency, including one way delay, traffic patterns & measurements for years now. Most don't understand latency and those impacts on user experience. More capacity, or a capacity measurement, shouldn't be confused with latency or responsiveness.

Finally, iPerf 2 (https://sourceforge.net/projects/iperf2/) is different from the iperf3 found at https://github.com/esnet/iperf. Each can be used to measure network performance; however, they DO NOT interoperate. They are completely independent implementations with different strengths, different options, and different capabilities. Both are under active development.

Thanks for starting the discussion.

 

PS. A localhost test can provide CPU & syscall measurements, e.g. iperf -c localhost -i 1 -e and iperf -s 

Microsoft

@Harvester Thank you for pointing out the change to iperf.fr! I added an update to Reason 3 noting this change.

 

@rjmcmahon Thank you for the details about iPerf2! I did not include iPerf2 in the discussion because I am not familiar with iPerf2, and I do not see a lot of Microsoft customers using it at the moment. I added a note in the article pointing out your comment for anyone who wants to know more about iPerf2.

 

The Microsoft networking team has been reading comments here and various other places on the Internet. One last thank you goes out to everyone who has participated in the discussion! We greatly appreciate the feedback and hope to have some updates/announcements once work on Windows Server 2025 is done.

 

Copper Contributor

@JamesKehr Yeah, it's a common misconception that a bigger number means better. In iperf's case, the number indicates different development teams with different goals. Iperf 3 is a misnomer, as it is not a follow-on to the original iperf. Iperf 2 is based on the original code.

I'd suggest trying iperf 2 on Windows. We're doing some 2.2.1 work around 64b binaries that should be released soon. Some engineers at Meta are helping. That might be worth evaluating when it's released within a few weeks.

Also, something you might want to mention is that only iperf 2 has statistical verification. We test across lots of test beds and multiple days before doing a release. The tool needs to come up with the same numbers, otherwise it's broken. We use control charts to make sure the tool performs the same. My guess is most other tools don't do this level of qualification of the tool itself.

Finally, it might be good if Windows systems added support for latency testing. This is non trivial but very important. All the features can be found in the man page. You can reach out to me on sourceforge if Windows developers want to help take this on.

Microsoft

Thanks again, @rjmcmahon! In true Microsoft fashion, we have a separate tool just for latency called Latte.

 

https://github.com/microsoft/latte

 

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-test-latency?tabs=windows

Copper Contributor
@JamesKehr Hi James, it's not in the interest of a platform vendor to supply the tool to test their own platform, particularly if that tool is platform or OS specific. It has to be broadly accepted as reliable, open and transparent to code changes, and vendor independent. Iperf 2 goes back 15 years or so in our use cases, covering at least ten kernel versions and multiple platforms, including things like set-top boxes. We have to cover most every platform that uses a Wi-Fi chip, which includes Windows but much more.

A suggestion is to inform MSFT engineers to support a tool like iperf 2 vs rolling their own, at least for external users. Internal tools are a different matter. External, industry-standard tools really need to prioritize vendor neutrality and industry acceptance. Your blog is a bit misguided in that context; suggesting not to use iperf 3 in favor of a vendor-specific tool isn't a good approach in my opinion. Better is to use an industry-standard tool like iperf 2, which is based on the original iperf code started around 2004 and actively maintained 20+ years later.

 

Microsoft

@rjmcmahon Please keep in mind that I am in support, so the article is support-slanted.

 

From a Microsoft supportability perspective, we need internally trusted tools like ntttcp and ctsTraffic that we (Microsoft support) can use for troubleshooting purposes when addressing networking issues related to Windows and the Microsoft ecosystem.

 

The tools mentioned in the article are built by people on the Windows core networking team and are used to validate the operating system and certain interoperability scenarios between Windows and other platforms. They are built by people who know networking and the Windows network stack inside and out. They are made available for public use by Microsoft so we (Microsoft support) and our customers can test knowing that the tool is built using our own internal best practices and recommendations. If something doesn’t work with our tools, then we have a good idea something is seriously broken.

 

This does not remove nor diminish the need for third-party (non-Microsoft) based tools. Native tools like iPerf2 are welcome and encouraged.

 

These tools serve several important functions. They can confirm Microsoft's own results and alleviate concerns about cheating, something that has been mentioned by more than one person in this comment section and across the Internet. Let's face it, no one will ever 100% trust a massive company like Microsoft.

 

But, most importantly, people need tools that can validate their own needs and scenarios, and the Microsoft tools may not provide those capabilities. Our tools are built for our needs, but our needs are not everyone’s needs.

 

While iPerf3 is an excellent tool for the platforms it natively supports, and in certain capacities with community-supported versions for other platforms, the fact that it is not purpose-built for Windows diminishes its usefulness in an official troubleshooting capacity for the reasons outlined in the article.

 

Does that help clarify the article’s stance?

Copper Contributor

Sure, for MSFT-only environments, MSFT tools are fine. Most networks now are a mix, requiring tools that run on all platforms and systems. Iperf3 is designed for capacity tests and its FAQ does say to use iperf2. Yet most still use iperf3 because 3 is larger than 2 and they don't understand the technical differences. Your post begins to speak to that per native vs posix socket calls and options.

Copper Contributor

Hi James and everyone else in this thread – Bruce Mah here, one of the developers of iperf3 at ESnet. We’ve heard from a lot of people about this blog post and been following the back and forth. Thank you for all the updates to the original post, to iperf.fr for removing the old Windows iPerf3 binaries, and to @rjmcmahon for jumping in to clarify the iperf2/iperf3 differences and the “bigger number ≠ better” issue. I have a bit more context to add to this thread. 

 

A little background on ESnet and iperf3
ESnet (www.es.net) provides scientific networking services to support the U.S. Department of Energy and its national laboratories, user facilities, and scientific instruments complex. We developed iperf3 as a rewrite of iperf2 in order to be able to test the end-to-end performance of networks doing large transfers of scientific data. The primary consumer of iperf3 is the perfSONAR measurement system (https://www.perfsonar.net/), which is widely used in the research and education (R&E) networking community. iperf3 is of course also usable as a standalone tool, which is one of the reasons it’s been released separately on GitHub. Work on iperf3 started in 2009 (the first commit was an import of the iperf-2.0 sources), with the first public release in 2013.

 

The commit history (and the original iperf2 project maintainer) will confirm that iperf3 was intended essentially as an iperf2 replacement. Thus there was a time during which iperf2 was basically abandonware. We’re happy to see that @rjmcmahon has assumed the maintainership of this code base and is actively developing for it.

 

Linux vs. other operating systems

Most of the high-performance networking that we see in the R&E networking space comes from Linux hosts, so it was natural that this is the main supported platform. Supporting iperf3 on FreeBSD and macOS has ensured some level of cross-platform support, at least to the extent of other UNIX and UNIX-like systems. While we have had many requests to make iperf3 work under Windows, we didn’t have the developer skills or resources to support that — and we still don’t. The fact that iperf3 works on Windows at all is a result of code contributions from the community, which we gratefully acknowledge.

 

There are many facets to end-to-end application network performance. These include of course routers, switches, NICs, and network links, but also the end host operating system, runtime libraries, and the application itself. To that extent iperf3 does characterize the performance of a certain set of applications designed for UNIX but trying to run (with some emulation or adaptation) in a Windows environment. We completely agree that this may not provide the highest throughput numbers on Windows, compared to a program that uses native APIs.

 

Iperf3 and Windows

We’re happy to see that iperf.fr has removed the old, obsolete binaries from their Web site. This is a problem that can affect any open-source project, not just iperf3.

 

As mentioned earlier, we’ve generally accepted patches for iperf3 to run on Windows (or other not-officially-supported operating systems such as Android, iOS, or various commercial UNIXes). These changes have allowed Windows hosts to run iperf3 tests (apparently with sub-optimal performance) against any other instance of iperf3, regardless of operating system.  

 

If there’s interest on the part of Microsoft in making a more-Windows-friendly version of iperf3, we’d welcome a conversation on that topic. Feel free to reach out to me anytime (https://www.es.net/bruce-mah/). 

 

Bruce.

 

Copper Contributor

Hi Bruce,

 

Thanks for your informative post. I'm skeptical that posix emulation is the bottleneck for these capacity tests. One can run iperf3 -c localhost and iperf3 -s to check. CPUs tend to run sub-nanosecond and from internal cache, so this should be fine, and off-board network i/o is going to be many orders of magnitude slower.

I also don't see a technical reason to go to MSFT tools here. We've been using iperf 2 on Windows systems for over a decade now for our Wi-Fi related testing. So have our industry partners. We see no issue, at least with iperf2, from a capacity test perspective.

I do agree the MSFT could help better support industry standard tools. The gist of this blog post is not to use industry standard tools. I'd be skeptical as a customer or user to follow this suggestion.


Copper Contributor

Hi Bob--

 

I'd be the first to admit that I don't understand the UNIX / POSIX type runtime environments (e.g. WSL/WSL2/Cygwin) under Windows well enough to have an educated opinion on where the bottlenecks might be. (I've been learning from reading this blog post and comments however, thanks again to everyone who's posted.)

 

I think that using Microsoft's proprietary tools can be useful in certain scenarios, such as an all-Windows environment, perhaps for doing some kind of specialized measurements or troubleshooting that are specific to that particular platform. But as you've pointed out, tools like iperf2 and iperf3, using industry standard APIs, are needed for the cross-platform environments that many users live in, as well as being able to compare results across different OS platforms.

 

Bruce.

Copper Contributor

Here's an iperf 2 run using localhost, which should give a measure of CPU performance:

C:\Users\rebec\Downloads>iperf-2.2.n-win64.exe -c localhost -e -i 1 -P 1
------------------------------------------------------------
Client connecting to localhost, TCP port 5001 with pid 13496 (1/0 flows/load)
Write buffer size: 131072 Byte
TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 512 KByte (default)
------------------------------------------------------------
[ 1] local 127.0.0.1 port 62121 connected with 127.0.0.1 port 5001 on 2024-04-29 09:10:36.443634 (Pacific Daylight Time)
[ ID] Interval Transfer Bandwidth Write/Err
[ 1] 0.00-1.00 sec 5.45 GBytes 46.8 Gbits/sec 44634/0
[ 1] 1.00-2.00 sec 5.29 GBytes 45.4 Gbits/sec 43315/0
[ 1] 2.00-3.00 sec 5.25 GBytes 45.1 Gbits/sec 43031/0
[ 1] 3.00-4.00 sec 5.32 GBytes 45.7 Gbits/sec 43579/0
[ 1] 4.00-5.00 sec 5.30 GBytes 45.5 Gbits/sec 43387/0
[ 1] 5.00-6.00 sec 5.37 GBytes 46.2 Gbits/sec 44023/0
[ 1] 6.00-7.00 sec 5.30 GBytes 45.5 Gbits/sec 43405/0
[ 1] 7.00-8.00 sec 5.33 GBytes 45.8 Gbits/sec 43662/0
[ 1] 8.00-9.00 sec 5.40 GBytes 46.4 Gbits/sec 44204/0
[ 1] 9.00-10.00 sec 5.19 GBytes 44.6 Gbits/sec 42487/0
[ 1] 0.00-10.01 sec 53.2 GBytes 45.6 Gbits/sec 435728/0

 

C:\Users\rebec\Downloads>iperf-2.2.n-win64.exe -c localhost -e -i 1 -P 2
------------------------------------------------------------
Client connecting to localhost, TCP port 5001 with pid 18136 (2/0 flows/load)
Write buffer size: 131072 Byte
TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 512 KByte (default)
------------------------------------------------------------
[ 2] local 127.0.0.1 port 62166 connected with 127.0.0.1 port 5001 on 2024-04-29 09:11:05.291509 (Pacific Daylight Time)
[ 1] local 127.0.0.1 port 62167 connected with 127.0.0.1 port 5001 on 2024-04-29 09:11:05.291515 (Pacific Daylight Time)
[ ID] Interval Transfer Bandwidth Write/Err
[ 1] 0.00-1.00 sec 5.03 GBytes 43.2 Gbits/sec 41242/0
[ 2] 0.00-1.00 sec 5.19 GBytes 44.6 Gbits/sec 42547/0
[SUM-cnt] Interval Transfer Bandwidth Write/Err
[SUM-2] 0.00-1.00 sec 10.2 GBytes 87.9 Gbits/sec 83789/0
[ 1] 1.00-2.00 sec 5.20 GBytes 44.7 Gbits/sec 42631/0
[ 2] 1.00-2.00 sec 5.31 GBytes 45.6 Gbits/sec 43476/0
[SUM-2] 1.00-2.00 sec 10.5 GBytes 90.3 Gbits/sec 86107/0
[ 1] 2.00-3.00 sec 5.16 GBytes 44.4 Gbits/sec 42298/0
[ 2] 2.00-3.00 sec 5.30 GBytes 45.5 Gbits/sec 43429/0
[SUM-2] 2.00-3.00 sec 10.5 GBytes 89.9 Gbits/sec 85727/0
[ 1] 3.00-4.00 sec 5.05 GBytes 43.4 Gbits/sec 41407/0
[ 2] 3.00-4.00 sec 5.19 GBytes 44.6 Gbits/sec 42548/0
[SUM-2] 3.00-4.00 sec 10.2 GBytes 88.0 Gbits/sec 83955/0
[ 1] 4.00-5.00 sec 5.07 GBytes 43.5 Gbits/sec 41494/0
[ 2] 4.00-5.00 sec 5.19 GBytes 44.6 Gbits/sec 42495/0
[SUM-2] 4.00-5.00 sec 10.3 GBytes 88.1 Gbits/sec 83989/0
[ 1] 5.00-6.00 sec 5.16 GBytes 44.3 Gbits/sec 42248/0
[ 2] 5.00-6.00 sec 5.30 GBytes 45.6 Gbits/sec 43456/0
[SUM-2] 5.00-6.00 sec 10.5 GBytes 89.9 Gbits/sec 85704/0
[ 1] 6.00-7.00 sec 5.05 GBytes 43.4 Gbits/sec 41345/0
[ 2] 6.00-7.00 sec 5.18 GBytes 44.5 Gbits/sec 42428/0
[SUM-2] 6.00-7.00 sec 10.2 GBytes 87.8 Gbits/sec 83773/0
[ 1] 7.00-8.00 sec 4.65 GBytes 39.9 Gbits/sec 38066/0
[ 2] 7.00-8.00 sec 4.32 GBytes 37.1 Gbits/sec 35383/0
[SUM-2] 7.00-8.00 sec 8.97 GBytes 77.0 Gbits/sec 73449/0
[ 1] 8.00-9.00 sec 4.78 GBytes 41.1 Gbits/sec 39154/0
[ 2] 8.00-9.00 sec 4.80 GBytes 41.2 Gbits/sec 39315/0
[SUM-2] 8.00-9.00 sec 9.58 GBytes 82.3 Gbits/sec 78469/0
[ 1] 9.00-10.00 sec 5.10 GBytes 43.8 Gbits/sec 41818/0
[ 2] 9.00-10.00 sec 5.24 GBytes 45.0 Gbits/sec 42946/0
[SUM-2] 9.00-10.00 sec 10.3 GBytes 88.9 Gbits/sec 84764/0
[ 1] 0.00-10.03 sec 50.3 GBytes 43.0 Gbits/sec 411704/0
[ 2] 0.00-10.04 sec 51.0 GBytes 43.7 Gbits/sec 418025/0
[SUM-2] 0.00-10.00 sec 101 GBytes 87.0 Gbits/sec 829729/0

 

Copper Contributor

For completeness: it looks like it's about 40K writes per second on the Windows machine, about 85K writes per second on a Linux machine, and about 135K writes per second on a Mac M3. These should scale, at least somewhat, with more cores.

Note: A Windows-native tool that measured writes per second could be useful to compare against posix emulations.

Linux:

[root@ctrl1fc35 ~]# iperf -c localhost -i 1 -e
------------------------------------------------------------
Client connecting to localhost, TCP port 5001 with pid 2874174 (1/0 flows/load)
Write buffer size: 131072 Byte
TCP congestion control using reno
TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[ 1] local 127.0.0.1%lo port 46056 connected with 127.0.0.1 port 5001 (icwnd/mss/irtt=319/32741/8) (ct=0.04 ms) on 2024-04-29 11:00:21.716476 (PDT)
[ ID] Interval Transfer Bandwidth Write/Err Rtry InF(pkts)/Cwnd(pkts)/RTT(var) NetPwr
[ 1] 0.00-1.00 sec 10.7 GBytes 92.0 Gbits/sec 87776/0 0 63K(1)/2685K(42)/12(3) us 958747995
[ 1] 1.00-2.00 sec 10.5 GBytes 89.9 Gbits/sec 85760/0 0 63K(1)/2685K(42)/11(2) us 1021884975
[ 1] 2.00-3.00 sec 10.7 GBytes 91.8 Gbits/sec 87586/0 0 63K(1)/4028K(63)/11(2) us 1043642927
[ 1] 3.00-4.00 sec 10.8 GBytes 92.6 Gbits/sec 88356/0 0 63K(1)/4028K(63)/12(3) us 965083136
[ 1] 4.00-5.00 sec 10.3 GBytes 88.3 Gbits/sec 84225/0 0 63K(1)/4028K(63)/11(2) us 1003594473
[ 1] 5.00-6.00 sec 10.2 GBytes 87.8 Gbits/sec 83699/0 0 63K(1)/4028K(63)/12(2) us 914216277
[ 1] 6.00-7.00 sec 10.3 GBytes 88.2 Gbits/sec 84119/0 0 63K(1)/4028K(63)/12(2) us 918803797
[ 1] 7.00-8.00 sec 10.3 GBytes 88.1 Gbits/sec 83994/0 0 191K(3)/4028K(63)/12(2) us 917438464
[ 1] 8.00-9.00 sec 10.1 GBytes 87.0 Gbits/sec 82932/0 0 63K(1)/4028K(63)/15(6) us 724670874
[ 1] 9.00-10.00 sec 10.1 GBytes 86.7 Gbits/sec 82638/0 0 63K(1)/4028K(63)/13(3) us 833194457
[ 1] 0.00-10.02 sec 104 GBytes 89.1 Gbits/sec 851086/0 0 0K(0)/4028K(63)/2034(4046) us 5475507

 

Mac OS X M3 
bm932125@D2G6XFM2XC iperf2-code % src/iperf -c localhost -i 1 -e
------------------------------------------------------------
Client connecting to localhost, TCP port 5001 with pid 66144 (1/0 flows/load)
Write buffer size: 131072 Byte
TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 1] local 127.0.0.1%lo0 port 59752 connected with 127.0.0.1 port 5001 (icwnd/mss/irtt=127/16332/2000) (ct=2.35 ms) on 2024-04-29 10:57:22.64917 (PDT)
[ ID] Interval Transfer Bandwidth Write/Err Rtry Cwnd(pkts)/RTT(var) NetPwr
[ 1] 0.00-1.00 sec 16.0 GBytes 138 Gbits/sec 131217/0 0 NA/1000(0)us 17198744
[ 1] 1.00-2.00 sec 16.4 GBytes 141 Gbits/sec 134738/0 0 NA/1000(0)us 17660379
[ 1] 2.00-3.00 sec 16.5 GBytes 142 Gbits/sec 135287/0 0 NA/1000(0)us 17732338
[ 1] 3.00-4.00 sec 16.5 GBytes 142 Gbits/sec 135019/0 0 NA/1000(0)us 17697210
[ 1] 4.00-5.00 sec 16.5 GBytes 141 Gbits/sec 134840/0 0 NA/1000(0)us 17673748
[ 1] 5.00-6.00 sec 16.4 GBytes 141 Gbits/sec 134527/0 0 NA/1000(0)us 17632723
[ 1] 6.00-7.00 sec 16.4 GBytes 141 Gbits/sec 134149/0 0 NA/1000(0)us 17583178
[ 1] 7.00-8.00 sec 16.4 GBytes 141 Gbits/sec 134465/0 0 NA/1000(0)us 17624596
[ 1] 8.00-9.00 sec 16.4 GBytes 141 Gbits/sec 134528/0 0 NA/1000(0)us 17632854
[ 1] 9.00-10.00 sec 16.0 GBytes 137 Gbits/sec 130760/0 0 NA/1000(0)us 17138975
[ 1] 0.00-10.01 sec 164 GBytes 140 Gbits/sec 1339533/0 0 NA/13000(0)us 1348923

Copper Contributor

For casual testing, you can simply use OpenSpeedTest. This lightweight tool runs on a tiny Nginx server, eliminating the need for additional software installation. It effectively handles network speeds up to 2-3 Gbps on most systems from 2014 or newer, reaching 10 Gbps+ for those built in 2019 or later. M1 Mac Minis and newer systems can even achieve 20 Gbps+ speeds. However, for requirements exceeding 20 Gbps, a browser-based test like OpenSpeedTest might not be the most suitable option. The good news is, it can cover most use cases without requiring any additional software on the client device. Just use your web browser! For maximum performance, however, avoid Firefox and opt for Chrome, Edge, or other Chromium browsers.

 

OpenSpeedTest Github :: https://github.com/openspeedtest/Speed-Test 

OpenSpeedTest Docker :: https://hub.docker.com/r/openspeedtest/latest

 

OpenSpeedTest-Server is available in the Microsoft Store, Apple App Store, etc. :: https://go.openspeedtest.com/Server
