Systems Performance Who’s Who

Appendix E Systems Performance Who’s Who

It can be useful to know who created the technologies that we use. This is a list of who’s who in the field of systems performance, based on the technologies in this book. This was inspired by the Unix who’s who list in [Libes 89]. Apologies to those who are missing or misappropriated. If you wish to dig further into the people and history, see the chapter references and the names listed in the Linux source, both the Linux repository history and the MAINTAINERS file in the Linux source code. The Acknowledgments section of my BPF book [Gregg 19] also lists various technologies, in particular extended BPF, BCC, bpftrace, kprobes, and uprobes, and the people behind them.

John Allspaw: Capacity planning [Allspaw 08].

Gene M. Amdahl: Early work on computer scalability [Amdahl 67].

Jens Axboe: CFQ I/O Scheduler, fio, blktrace, io_uring.

Brenden Blanco: BCC.

Jeff Bonwick: Invented kernel slab allocation, co-invented user-level slab allocation, co-invented ZFS, kstat, first developed mpstat.

Daniel Borkmann: Co-creator and maintainer of extended BPF.

Roch Bourbonnais: Sun Microsystems systems performance expert.

Tim Bray: Authored the Bonnie disk I/O micro-benchmark, known for XML.

Bryan Cantrill: Co-created DTrace; Oracle ZFS Storage Appliance Analytics.

Rémy Card: Primary developer for the ext2 and ext3 file systems.

Nadia Yvette Chambers: Linux hugetlbfs.

Guillaume Chazarain: iotop(1) for Linux.

Adrian Cockcroft: Performance books [Cockcroft 95][Cockcroft 98], Virtual Adrian (SE Toolkit).

Tim Cook: nicstat(1) for Linux, and enhancements.

Alan Cox: Linux network stack performance.

Mathieu Desnoyers: Linux Trace Toolkit (LTTng), kernel tracepoints, main author of userspace RCU.

Frank Ch. Eigler: Lead developer for SystemTap.

Richard Elling: Static performance tuning methodology.

Julia Evans: Performance and debugging documentation and tools.

Kevin Robert Elz: DNLC.

Roger Faulkner: Wrote /proc for UNIX System V, thread implementation for Solaris, and the truss(1) system call tracer.

Thomas Gleixner: Various Linux kernel performance work, including hrtimers.

Sebastian Godard: sysstat package for Linux, which contains numerous performance tools including iostat(1), mpstat(1), pidstat(1), nfsiostat(1), cifsiostat(1), and an enhanced version of sar(1), sadc(8), sadf(1) (see the metrics in Appendix B).

Sasha Goldshtein: BPF tools (argdist(8), trace(8), etc.), BCC contributions.

Brendan Gregg: nicstat(1), DTraceToolkit, ZFS L2ARC, BPF tools (execsnoop, biosnoop, ext4slower, tcptop, etc.), BCC/bpftrace contributions, USE method, heat maps (latency, utilization, subsecond-offset), flame graphs, FlameScope, this book and previous ones [Gregg 11a][Gregg 19], other perf work.

Dr. Neil Gunther: Universal Scalability Law, ternary plots for CPU utilization, performance books [Gunther 97].

Jeffrey Hollingsworth: Dynamic instrumentation [Hollingsworth 94].

Van Jacobson: traceroute(8), pathchar, TCP/IP performance.

Raj Jain: Systems performance theory [Jain 91].

Jerry Jelinek: Solaris Zones.

Bill Joy: vmstat(1), BSD virtual memory work, TCP/IP performance, FFS.

Andi Kleen: Intel performance, numerous contributions to Linux.

Christoph Lameter: SLUB allocator.

William LeFebvre: Wrote the first version of top(1), inspiring many other tools.

David Levinthal: Intel processor performance expert.

John Levon: OProfile.

Mike Loukides: First book on Unix systems performance [Loukides 90], which either began or encouraged the tradition of resource-based analysis: CPU, memory, disk, network.

Robert Love: Linux kernel performance work, including for preemption.

Mary Marchini: libstapsdt: dynamic USDT for various languages.

Jim Mauro: Co-author of Solaris Performance and Tools [McDougall 06a], DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X, and FreeBSD [Gregg 11a].

Richard McDougall: Solaris microstate accounting, co-author of Solaris Performance and Tools [McDougall 06a].

Marshall Kirk McKusick: FFS, work on BSD.

Arnaldo Carvalho de Melo: Linux perf(1) maintainer.

Barton Miller: Dynamic instrumentation Hollingsworth 94].

David S. Miller: Linux networking maintainer and SPARC maintainer. Numerous performance improvements, and support for extended BPF.

Cary Millsap: Method R.

Ingo Molnar: O(1) scheduler, completely fair scheduler, voluntary kernel preemption, ftrace, perf, and work on real-time preemption, mutexes, futexes, scheduler profiling, work queues.

Richard J. Moore: DProbes, kprobes.

Andrew Morton: fadvise, read-ahead.

Gian-Paolo D. Musumeci: System Performance Tuning, 2nd Ed. [Musumeci 02].

Mike Muuss: ping(8).

Shailabh Nagar: Delay accounting, taskstats.

Rich Pettit: SE Toolkit.

Nick Piggin: Linux scheduler domains.

Bill Pijewski: Solaris vfsstat(1M), ZFS I/O throttling.

Dennis Ritchie: Unix, and its original performance features: process priorities, swapping, buffer cache, etc.

Alastair Robertson: Created bpftrace.

Steven Rostedt: Ftrace, KernelShark, real-time Linux, adaptive spinning mutexes, Linux tracing support.

Rusty Russell: Original futexes, various Linux kernel work.

Michael Shapiro: Co-created DTrace.

Aleksey Shipilev: Java performance expert.

Balbir Singh: Linux memory resource controller, delay accounting, taskstats, cgroupstats, CPU accounting.

Yonghong Song: BTF, and extended BPF and BCC work.

Alexei Starovoitov: Co-creator and maintainer of extended BPF.

Ken Thompson: Unix, and its original performance features: process priorities, swapping, buffer cache, etc.

Martin Thompson: Mechanical sympathy.

Linus Torvalds: The Linux kernel and numerous core components necessary for systems performance, Linux I/O scheduler, Git.

Arjan van de Ven: latencytop, PowerTOP, irqbalance, work on Linux scheduler profiling.

Nitsan Wakart: Java performance expert.

Tobias Waldekranz: ply (first high-level BPF tracer).

Dag Wieers: dstat.

Karim Yaghmour: LTT, push for tracing in Linux.

Jovi Zhangwei: ktap.

Tom Zanussi: Ftrace hist triggers.

Peter Zijlstra: Adaptive spinning mutex implementation, hardirq callbacks framework, other Linux performance work.

E.1 References

[Amdahl 67] Amdahl, G., “Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities,” AFIPS, 1967.

[Libes 89] Libes, D., and Ressler, S., Life with UNIX: A Guide for Everyone, Prentice Hall, 1989.

[Loukides 90] Loukides, M., System Performance Tuning, O'Reilly, 1990.

[Hollingsworth 94] Hollingsworth, J., Miller, B., and Cargille, J., “Dynamic Program Instrumentation for Scalable Performance Tools,” Scalable High-Performance Computing Conference (SHPCC), May 1994.

[Cockcroft 95] Cockcroft, A., Sun Performance and Tuning, Prentice Hall, 1995.

[Cockcroft 98] Cockcroft, A., and Pettit, R., Sun Performance and Tuning: Java and the Internet, Prentice Hall, 1998.

[Musumeci 02] Musumeci, G. D., and Loukides, M., System Performance Tuning, 2nd Edition, O'Reilly, 2002.

[McDougall 06a] McDougall, R., Mauro, J., and Gregg, B., Solaris Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris, Prentice Hall, 2006.

[Gunther 07] Gunther, N., Guerrilla Capacity Planning, Springer, 2007.

[Allspaw 08] Allspaw, J., The Art of Capacity Planning, O'Reilly, 2008.

[Gregg 11a] Gregg, B., and Mauro, J., DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X and FreeBSD, Prentice Hall, 2011.

[Gregg 19] Gregg, B., BPF Performance Tools: Linux System and Application Observability, Addison-Wesley, 2019.

Linux/UNIX commands for assessing system performance include:

  • uptime: how long the system has been running, and the 1-, 5-, and 15-minute load averages
  • top: for an overall system view
  • vmstat: reports information about runnable or blocked processes, memory, paging, block I/O, traps, and CPU activity
  • htop: interactive process viewer
  • dstat, atop: help correlate resource data for processes, memory, paging, block I/O, traps, and CPU activity
  • iftop: interactive network traffic viewer, per interface
  • nethogs: interactive network traffic viewer, per process
  • iotop: interactive I/O viewer
  • iostat: for storage I/O statistics
  • netstat: for network statistics
  • mpstat: for CPU statistics
  • tload: load average graph for the terminal
  • xload: load average graph for X
  • /proc/loadavg: text file containing the load averages (read directly in the sketch below)
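The last three entries all present the same kernel data, exported via /proc/loadavg. As a minimal sketch (Python is used here purely for illustration), that file can be read and parsed directly:

  #!/usr/bin/env python3
  # Minimal sketch: read the kernel's load averages from procfs, the same
  # source used by uptime(1), tload(1), and xload.
  with open("/proc/loadavg") as f:
      fields = f.read().split()

  # The first three fields are the 1-, 5-, and 15-minute load averages;
  # the fourth is "runnable/total" scheduling entities; the fifth is the last PID.
  load1, load5, load15 = (float(x) for x in fields[:3])
  runnable, total = fields[3].split("/")

  print(f"load averages: {load1:.2f} {load5:.2f} {load15:.2f} "
        f"({runnable} of {total} entities runnable)")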
