Compare commits

...

116 Commits
0.10.2 ... main

Author SHA1 Message Date
Grace Stok
d50285075d
docs: name change in contributor list (#1709) 2025-04-04 16:30:10 -04:00
allcontributors[bot]
60aac16ff0
docs: add benjamb as a contributor for code (#1707)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2025-03-30 12:28:38 -04:00
Ben Brown
3330c1f1e7
fix: Disable mouse capture when disable_click is set (#1706) 2025-03-30 12:28:21 -04:00
Clement Tsang
f8b8a21748
docs: use local link in docs to reference itself & missing CPU config (#1704)
* docs: use local link in docs to reference itself

* also fix missing cpu config docs
2025-03-28 04:22:40 +00:00
Clement Tsang
dd995c170f
docs: bump docs requirements.txt (#1702) 2025-03-28 04:05:52 +00:00
Clement Tsang
b1f969880e
other: remove comment about skipping timeseries in basic (#1700)
Turns out I already did this. Nice.
2025-03-28 03:56:24 +00:00
Ada Ahmed
37a546ab0f
other: update amdgpu marketing names and trim excess keywords (#1692) 2025-03-21 20:04:31 -04:00
Clement Tsang
a90490fe83
docs: update README and autocomplete docs (#1699)
* formatting

* update autocomplete
2025-03-21 09:21:04 +00:00
Clement Tsang
40087d1203
docs: image tags are hard (#1698)
* docs: image tags are hard

* come on what

* ??

* img
2025-03-21 05:01:30 -04:00
Clement Tsang
95ac255557
docs: update thanks again (#1697)
It's thankin' time
2025-03-21 04:57:38 -04:00
Clement Tsang
235fd837a3
docs: update thanks (#1696) 2025-03-21 04:36:51 -04:00
Clement Tsang
2afd32fbb0
docs: update font-related troubleshooting (#1695) 2025-03-21 02:56:19 -04:00
Clement Tsang
5e95f8fac8
deps: bump lock deps as of 2025-03-19 (#1691) 2025-03-20 02:29:01 +00:00
Clement Tsang
769372ead6
deps: bump root dependencies as of 2025-03-19 (#1690)
* deps: bump root deps as of 2025-03-19

Tried this with cursor, worked alright after some nudging.

* looks like it missed a few

* nvm it missed a lot
2025-03-20 01:03:20 +00:00
Julius Enriquez
a608a1bf83
docs: Add Alpine Linux installation instructions (#1689) 2025-03-10 10:23:11 -04:00
allcontributors[bot]
a02257e6af
docs: add win8linux as a contributor for doc (#1688)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2025-03-10 02:45:05 -04:00
Julius Enriquez
5442529888
docs: Add openSUSE installation instructions (#1687) 2025-03-10 02:44:55 -04:00
Clement Tsang
3113c24e37
other: allow for hyphen versions of arguments to be used (#1686)
This PR allows both args like `--autohide-time` _and_ `--autohide_time` to work.
2025-03-06 04:59:50 +00:00
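A minimal sketch of the idea behind #1686, assuming clap's argument aliasing (the argument names below are illustrative, not bottom's actual definitions): register the hyphenated spelling as a hidden alias of the underscored flag so both parse.

```rust
// Illustrative only: accept both `--autohide_time` and `--autohide-time`
// by registering the hyphenated form as a hidden alias (clap 4 derive API).
use clap::Parser;

#[derive(Parser, Debug)]
struct Args {
    /// Automatically hide the time axis after a delay.
    #[arg(long = "autohide_time", alias = "autohide-time")]
    autohide_time: bool,
}

fn main() {
    // Both `btm --autohide_time` and `btm --autohide-time` set this to true.
    let args = Args::parse();
    println!("autohide_time = {}", args.autohide_time);
}
```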
Clement Tsang
05e1adc0b4
docs: Update CHANGELOG.md for #1683 (#1685) 2025-02-28 04:07:19 -05:00
allcontributors[bot]
40cc08d6d9
docs: add mtoohey31 as a contributor for code (#1684)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2025-02-28 04:05:54 -05:00
Matthew Toohey
f8cfd962f3
fix: prevent graph lines from showing up behind legend (#1683)
* fix: prevent graph lines from showing up behind legend

* use cell_mut instead
2025-02-28 09:00:08 +00:00
Clement Tsang
d2177ed022
refactor: don't duplicate AMD GPU temperature retrieval (#1682)
* some file/struct renaming

* refactor: don't get AMD gpu temperatures twice
2025-02-23 06:21:12 +00:00
Clement Tsang
f7d070f944
refactor: somewhat migrate to Rust 2024 edition (#1681)
* refactor: try bumping to rust 2024 edition

* now run nightly fmt

* fix some macos changes

* only apply a few of these settings
2025-02-22 02:12:08 +00:00
Clement Tsang
9999a4824a
docs: update documentation around how grouping processes and tree mode are incompatible (#1679)
* update default config

* use info, not note
2025-02-21 11:04:08 +00:00
Clement Tsang
393c24d303
docs: bump mkdocs-material to 9.6.5 (#1680) 2025-02-21 05:49:13 -05:00
Clement Tsang
d63ca07cae
refactor: clean up some file structure, process code, and terminal cleanup (#1676)
* move widgets

* reduce allocations needed

* ah

* more possible optimizations around reducing allocs

* some fixes

* I forgot to clear the buffer oops

* missing

* only run terminal cleanup after certain point
2025-02-15 02:32:09 -05:00
allcontributors[bot]
2b5441ca8b
docs: add SigmaSquadron as a contributor for doc (#1675)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2025-02-13 13:25:57 -05:00
Fernando Rodrigues
7f98651541
docs: update Nix install instructions (#1674)
This commit contains the following changes:
1. Corrects the source of the package in the Nix instructions: `bottom` isn't coming from `nix-community`, but Nixpkgs itself.
2. Updates the Nix install instructions to use the new Nix CLI.
3. The original link was pointing to a home-manager module for bottom. We now include instructions on how to enable said module.
2025-02-13 13:25:45 -05:00
Clement Tsang
702775f58d
refactor: use nonzero in mem data (#1673)
* refactor: use nonzerou64 for mem data

* clippy

* comment
2025-02-12 05:58:15 +00:00
Justin Martin
22fbd7d630
other: return None when mem_total is zero (#1667) 2025-02-07 19:02:07 -05:00
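The two memory commits above (#1673, #1667) pair naturally: storing the total as a `NonZeroU64` makes a zero total unrepresentable, and returning `None` when the harvested total is zero keeps the percentage math safe. A minimal sketch with assumed struct and function names, not bottom's actual code:

```rust
use std::num::NonZeroU64;

// Illustrative struct, not bottom's actual memory harvest type.
struct MemHarvest {
    used_bytes: u64,
    total_bytes: NonZeroU64, // never zero, so use% cannot divide by zero
}

/// Returns None when the reported total is zero (a bogus reading).
fn harvest(used: u64, total: u64) -> Option<MemHarvest> {
    let total_bytes = NonZeroU64::new(total)?; // None if total == 0
    Some(MemHarvest { used_bytes: used, total_bytes })
}

fn use_percent(m: &MemHarvest) -> f64 {
    m.used_bytes as f64 / m.total_bytes.get() as f64 * 100.0
}

fn main() {
    assert!(harvest(512, 0).is_none());
    let m = harvest(2048, 4096).unwrap();
    // Bonus: Option<NonZeroU64> is the same size as u64 (niche optimization).
    assert_eq!(
        std::mem::size_of::<Option<NonZeroU64>>(),
        std::mem::size_of::<u64>()
    );
    println!("{:.1}%", use_percent(&m)); // 50.0%
}
```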
Clement Tsang
ec1a4cb7e5
refactor: move conversion code to utils (#1671) 2025-02-07 01:54:03 +00:00
Clement Tsang
a82d8578cf
fix/other: fix bug with network y-axis labels and cache height calculations (#1670)
* test

* implement network height calc caching
2025-02-06 20:29:06 -05:00
Clement Tsang
8ac03b5962
other: don't collect time series data in basic mode (#1669)
Ideally I also introduce a way to ensure basic mode widgets straight up
cannot accidentally access ts data, but this works for now.
2025-02-06 19:21:18 -05:00
Clement Tsang
43a4a36429
pkg: add completion files to winget/msi installer (#1666)
* pkg: add completion files to winget/msi installer

* hmmm

* hmmmm

* oh lol
2025-02-04 06:43:21 +00:00
Clement Tsang
f3a2067a78
docs: mention conda as an install option (#1665) 2025-02-03 19:20:23 -05:00
Clement Tsang
d6ad688ab8
refactor: use cow for disk widget io read/write strings (#1664) 2025-02-03 06:48:12 +00:00
Clement Tsang
837e23560f
refactor: points rework (v1) (#1663)
* refactor: add new method of storing timeseries data

* mostly finish adding data

* tmp

* migrate over to separate lib

* prepare to migrate over to new timeseries storage

* prepare to migrate frozen state

* migrate frozen state

* name

* migrate data collection

* migrate network

* fix some stuff

* fix a panic from bad pruning

* Fix pruning issues

* migrate RAM

* migrate swap

* migrate cache label

* refactor out to function

* migrate ram points

* migrate swap points

* migrate cache points

* migrate arc

* migrate gpu, remove a bunch of state code around force update

* rename cache, also some comments

* some temp cleanup

* migrate disk

* comments to remind me about fixmes, fix bug around time graph spans

* migrate load avg

* port temps

* style

* fix bug with left edge gap

* partial migration of cpu, reorganize data file structure

* migrate cpu

* some cleanup

* fix bug with cpu widget + clippy

* start some small optimization work

* fix some things for some platforms

* refactor: rename data_collection to collection

* refactor: only process temp type in data eat step

* flatten components folder a bit

* partially migrate to new graph system and fix cpu bug

* driveby migration of process list to reduce allocs + more migration of points drawing

* revert the collection change

Forgot that I cut a new `Data` on each collection so that change was
useless.

* port over network stuff...

* fully migrate network, and fix some log bugs while we're at it

This is something I never noticed, but the log of 0 is inf - so there
were gaps in the lines when using log scaling! (A worked illustration follows this entry.)

* fix cpu colour in all mode

* clean up some disk table stuff
2025-02-03 06:34:58 +00:00
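A worked illustration of the log-scaling gap called out in #1663 above: the base-2 log of 0 is negative infinity, which cannot be plotted, so zero-valued points silently vanished and left gaps in the line. Shifting or clamping zero is one way around it; the snippet below is an assumption for illustration, not necessarily the exact fix that landed.

```rust
// log2(0) is -infinity, which a chart backend can't place, so a zero-valued
// point simply disappears from the line. Shifting by 1 keeps zero on the axis.
fn log2_scaled(value: f64) -> f64 {
    if value > 0.0 {
        (value + 1.0).log2() // shift so that 0 maps to 0 instead of -inf
    } else {
        0.0
    }
}

fn main() {
    assert!(0.0_f64.log2().is_infinite()); // the root cause: log2(0) == -inf
    assert_eq!(log2_scaled(0.0), 0.0); // a zero point now stays on the axis
    assert!((log2_scaled(1024.0) - 10.0).abs() < 0.01);
}
```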
allcontributors[bot]
0aae119cfa
docs: add fgimian as a contributor for code, and doc (#1662)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2025-01-18 15:28:43 -05:00
Fotis Gimian
cf91c73b60
feature: make it possible to override generate dirs via env (#1658) 2025-01-18 15:27:24 -05:00
Clement Tsang
1edd8d81ed
deps: bump Cargo.toml and Cargo.lock dependencies (#1659)
* deps: bump dependencies

* update lockfile dependencies
2025-01-18 03:23:06 +00:00
Clement Tsang
c970037546
bug: handle terminal cleanup if main.rs panics from an Err (#1660)
* bug: handle terminal cleanup if main.rs panics from an Err

* add comment

* changelog
2025-01-18 02:43:58 +00:00
Clement Tsang
873434b4b7
other: fix non-applicable warning about regex creation in loop (#1661) 2025-01-18 02:30:00 +00:00
Clement Tsang
c9ffc41e51
deps: bump dependencies (#1656) 2025-01-07 00:44:48 -05:00
allcontributors[bot]
ee360f9391
docs: add al42and as a contributor for code (#1657)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2025-01-06 23:36:30 -05:00
Andrey Alekseenko
915c25a50e
other: handle systems with only libnvidia-ml.so.1 (#1655)
Recently, NVIDIA CUDA repository packages started shipping only
`libnvidia-ml.so.1` file, without `libnvidia-ml.so`. The upstream
`nvml-wrapper` package has a fix proposed
(https://github.com/Cldfire/nvml-wrapper/pull/63), yet the package is
in search of a maintainer at the moment.

To allow `bottom` to correctly detect NVIDIA GPUs on Ubuntu with
official NVIDIA packages, add a wrapper around `Nvml::init` to be more
persistent in its search for the NVML library.
2025-01-06 23:36:20 -05:00
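A hedged sketch of the "more persistent" NVML init described in #1655, assuming nvml-wrapper exposes a builder with a library-path override (check the crate docs for the exact signature); the fallback name is the versioned soname that the CUDA repository packages ship:

```rust
use nvml_wrapper::{error::NvmlError, Nvml};
use std::ffi::OsStr;

// Illustrative wrapper: if the default init fails because only
// libnvidia-ml.so.1 is installed, retry against the versioned soname.
fn init_nvml() -> Result<Nvml, NvmlError> {
    match Nvml::init() {
        Ok(nvml) => Ok(nvml),
        // Default lookup wants libnvidia-ml.so; fall back to the .so.1 name.
        Err(_) => Nvml::builder()
            .lib_path(OsStr::new("libnvidia-ml.so.1"))
            .init(),
    }
}
```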
Clement Tsang
dbda1ee56f
refactor: more data conversion cleanup (#1653)
* clean up some battery stuff

* dedupe battery from data conversion

* idk why we had a Value type alias

* clean up dupe load avg, and remove memory use percent from memharvest

* hmm

* nvm
2024-12-24 15:54:41 -05:00
Clement Tsang
cd6c60c054
refactor: remove battery conversion step (#1652)
* refactor: remove battery conversion step

* also fix a bug with margins in battery

* fixes
2024-12-22 22:54:19 -05:00
allcontributors[bot]
4a4d53dafb
docs: add Wateir as a contributor for doc (#1651)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2024-12-22 14:51:59 -05:00
Wateir
d738790c7a
Update README.md (#1650)
Avoid using sudo for AUR helper
2024-12-22 14:51:48 -05:00
Clement Tsang
603d8fe698
deps: bump lockfile as of 2024-12-20 (#1649) 2024-12-20 08:29:13 +00:00
Clement Tsang
3ca753f4b9
deps: bump root deps as of 2024-12-20 (#1648)
* deps: bump root deps as of 2024-12-20

* remove deprecated code
2024-12-20 03:17:27 -05:00
Clement Tsang
35662fc3c0
docs: update changelog for #1641 (#1647) 2024-12-20 08:02:23 +00:00
allcontributors[bot]
ccdf0b402b
docs: add yretenai as a contributor for code (#1646)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2024-12-20 02:48:47 -05:00
Ada Ahmed
479276bd53
feature: Support AMDGPU Data Collection (#1641)
* gpu: support amdgpu tracking

Co-authored-by: lvxnull2 <184518908+lvxnull2@users.noreply.github.com>

* gpu: dependency-free amdgpu parsing

gpu: fix clippy issues

Co-authored-by: lvxnull2 <184518908+lvxnull2@users.noreply.github.com>

* gpu: change memory usage percentage to be scaled to total memory instead of current memory usage

gpu: requested syntax changes

Co-authored-by: lvxnull2 <184518908+lvxnull2@users.noreply.github.com>

---------

Co-authored-by: lvxnull2 <184518908+lvxnull2@users.noreply.github.com>
2024-12-20 02:48:32 -05:00
Clement Tsang
797179b3dc
docs: update changelog for #1642 (#1644) 2024-12-05 23:55:25 +00:00
Clement Tsang
0d182e4b3a
feature: support custom widget borders (#1642)
* run a dep bump

* add widget border type

* feature: support custom widget borders

* fmt

* remove none since it looks really bad

* fix bug with title for tables with no title when expanded

* fix jsonschema

* fix some unused stuff
2024-12-05 06:52:55 +00:00
Clement Tsang
1fe17ddc21
ci: migrate FreeBSD release builds to GHA, update macOS and FreeBSD targets (#1640)
* ci: migrate FreeBSD release builds to GHA

* fix

* ci: bump macOS image from macos-12

* fix

* fix for nightly
2024-11-29 17:41:35 -05:00
Clement Tsang
70d0a6cbf7
ci: update jsonschema to 0.26.1 for schema validation (#1637)
* ci: update jsonschema to 0.26.1 for schema validation

* make sure to rerun schema validation
2024-11-29 20:29:30 +00:00
Clement Tsang
3597e0a9fd
ci: remove unused ci packaging script (#1639) 2024-11-29 20:27:37 +00:00
Clement Tsang
8f8c467f8b
docs: bump docs requirements.txt (#1638) 2024-11-29 20:19:04 +00:00
Clement Tsang
5b1163d29b
ci: clean up CI, update python action + version (#1636)
* ci: clean up CI

* bump python action, also version to 3.12
2024-11-29 20:16:43 +00:00
Clement Tsang
bc3032cf10
bug: fix incorrect versions in schemas (#1635) 2024-11-29 04:30:23 +00:00
Clement Tsang
ae0d350122
refactor: a bunch of cleanup of dead code and misc. stuff (#1634)
* refactor: lines

* shift around some stuff in Cargo.toml

* some docs

* some more cargo stuff

* clean up a bunch of stuff after making things less public

* clippy lints

* a lot more cleanup

* clippy

* fix some errors

* fix for windows
2024-11-28 22:42:17 +00:00
Clement Tsang
182c718d0e
bug: fix incorrect colours for gruvbox-light (#1633) 2024-11-28 19:27:15 +00:00
Clement Tsang
991cc3eed8
refactor: clean up some clippy lints from 1.83 (#1632) 2024-11-28 19:16:15 +00:00
Clement Tsang
24cb8a417c
refactor: move schema generation to its own binary, go back to lib-bin (#1630)
* refactor: separate schema generation to its own binary, go back to lib-bin setup

Decided it might be nicer to separate the schema generation bit to its
own binary. This does mean that we have to go back to the lib-bin
system, as otherwise passing shared code is _really_ hard.

* handle versioning

* run fmt
2024-11-28 08:05:25 +00:00
Clement Tsang
196d6d18c6
feature: add the ability to configure the disk widget's table columns (#1625)
* a bit of refactoring here...

* some refactoring, add columns

* cleanup

* add disk column feature

* update changelog
2024-11-18 02:28:20 +00:00
Clement Tsang
c8cba49463
other: add missing process column comment/schema description (#1623)
* add todo

* rerun schema
2024-11-14 10:34:05 +00:00
Clement Tsang
6d37d5756f
refactor: combine process column code (#1622)
* rename some files

* refactor: combine process column code

* rename some and sort the schema columns
2024-11-14 10:24:24 +00:00
Clement Tsang
103c4f6ab4
deps: bump various dependencies (#1621)
Bumps various dependencies, including ratatui
2024-11-14 09:23:07 +00:00
Clement Tsang
02b947dd2d
refactor: quick variable/struct/file rename (#1620)
Some renames and file movement. No functional changes.
2024-11-08 04:54:52 +00:00
Clement Tsang
ae14685913
refactor: clean up some unused serde code (#1619) 2024-11-08 04:13:07 +00:00
Clement Tsang
16a2fd6a41
deps: bump to ratatui 0.28 (#1618)
* deps: bump ratatui to 0.28, and crossterm to 0.28

* fix warnings
2024-11-03 16:11:00 +00:00
Clement Tsang
dc378ebd42
github: update bug report template around filesystem type (#1617) 2024-11-03 15:41:35 +00:00
Clement Tsang
4f92ffc1cc
deps: bump lock and some root deps (#1616) 2024-11-03 10:37:02 -05:00
Clement Tsang
776f8cb3d3
refactor: bump 'msrv' to 1.81 and update deprecated code (#1615)
* refactor: ignore warning for deprecated panic hook from Rust 1.82.0

* refactor: bump 'msrv' to 1.81 and update deprecated code

* some more cleanup

* even more cleanup
2024-11-01 17:51:12 +00:00
Clement Tsang
f2e329b00a
docs: bump docs requirements.txt (#1609) 2024-10-16 00:25:43 +00:00
Clement Tsang
76fb7598e9
deps: bump lockfile deps (#1608)
* deps: bump lockfile deps

* left one
2024-10-15 02:03:13 +00:00
Clement Tsang
318ed9fd6f
deps: bump starship-battery to 0.10.0 (#1607) 2024-10-14 21:53:44 -04:00
Clement Tsang
4189ae0935
deps: bump a few root deps (#1606) 2024-10-15 01:00:57 +00:00
Clement Tsang
ca6ee28fb1
ci: fix nightly job not skipping if no change (#1601) 2024-09-21 02:09:16 -04:00
Clement Tsang
5b3803f905
docs: update changelog for #1596 (#1599) 2024-09-16 05:38:21 +00:00
allcontributors[bot]
d8a83cdf90
docs: add llc0930 as a contributor for code (#1597)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2024-09-13 14:39:31 -04:00
llc0930
fe25055cc1
bug: fix support for nilfs2 file system (#1596)
Fix the problem that the nilfs2 file system partition is not displayed in the disk list.
2024-09-13 14:39:20 -04:00
Clement Tsang
4e47f9b51a
bug: fix incorrect default config definitions for chart legends (#1594)
I had changed how this was parsed in-code but I forgot to update the default configs. This also adds some e2e tests to hopefully catch this all for real in the future, since the schema ones don't catch this stuff and the constants test doesn't actually run the binary for a proper e2e test.
2024-09-12 09:51:23 +00:00
Clement Tsang
3edf430908
bug: fix using 'none' for chart legend position in configs (#1593)
* bug: fix using 'none' for legend position in configs

* forgot memory oops

* update changelog
2024-09-12 05:23:20 -04:00
allcontributors[bot]
eaa56238be
docs: add jasongwartz as a contributor for doc (#1589)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2024-09-09 10:16:18 -04:00
Jason Gwartz
35a7eca134
docs: Update demo caption from "--color" to "--theme" (#1588)
As per [the changelog](d20dc49c95/CHANGELOG.md (L90)), `--color` was replaced by `--theme`. This updates the screenshot comment in the README to reflect this change.
2024-09-09 10:16:09 -04:00
Clement Tsang
d20dc49c95
deps: bump a few root deps (#1587) 2024-09-09 02:24:51 +00:00
Clement Tsang
7678c46f42
docs: update README (#1585)
* docs: update README

* also fix ci if
2024-09-06 02:15:00 +00:00
Clement Tsang
2e5000e399
ci: merge mock + init job in nightly (#1584)
* ci: merge mock + init job in nightly

* also slightly bump timeout as a safeguard
2024-09-04 22:56:48 -04:00
Clement Tsang
6c42770b5e
ci: fix a few actions (#1583)
A few small things:

- Tweak timeouts
- Disable audit workflow as codecov mostly handles it now
- Fix mock check in nightly
2024-09-05 02:08:01 +00:00
Clement Tsang
c9a99886a5
deps: bump a few root deps as of 2024-09-03 (#1582)
* deps: bump a few root deps as of 2024-09-03

I didn't touch the TUI ones (crossterm, ratatui) or sysinfo for now.

* also run cargo update
2024-09-03 21:58:34 -04:00
Clement Tsang
97358d09c3
ci: fix CI pass check conditions (#1581)
* ci: fix CI pass check conditions

* also disable test because it's borked for some things
2024-09-03 23:58:07 +00:00
Clement Tsang
78879fc068
docs: update changelog (#1580) 2024-09-03 19:46:28 -04:00
Clement Tsang
1a715206be
ci: try using GHA instead of Cirrus for FreeBSD in basic CI (#1577)
Ideally we minimize our usage of Cirrus CI, especially for typical PR CI workflows, since it's a bit cludgy to work with. This method is also more extendable to things like OpenBSD.

Fine for deploys I guess since that's not super frequent and at this point I have that working fairly well when automated + I don't usually have to wait for it.
2024-09-03 08:33:13 +00:00
allcontributors[bot]
21a09fd6bc
docs: add stephen-huan as a contributor for code (#1579)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2024-09-03 03:27:26 -04:00
Stephen Huan
7c35def686
fix: selected text bg in default-light theme (#1578) 2024-09-03 03:27:03 -04:00
Clement Tsang
c63574dc78
deps: bump some CI actions as of 2024-09-01 (#1576)
* deps: bump some CI actions as of 2024-09-01

* missed one
2024-09-01 21:23:07 -04:00
Clement Tsang
2c03525945
other: regenerate the sample default config to match 0.10.2 (#1573)
The default sample config was outdated.
2024-08-27 18:10:31 -04:00
Clement Tsang
a095e67179
change: default config location on macOS considers XDG config var (#1570)
Actually support $XDG_CONFIG_HOME on macOS. Apparently in our docs we also say we do, but we, uh, don't, because dirs doesn't.

Note this is backwards-compatible, in that if a config file exists in the old default locations, we will check those first.
2024-08-22 01:00:55 +00:00
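A hypothetical sketch of the lookup order described in #1570; the helper name and paths below are illustrative assumptions, not bottom's actual implementation:

```rust
use std::path::PathBuf;

// Backwards-compatible lookup: prefer an existing config in the old default
// location, otherwise respect $XDG_CONFIG_HOME on macOS.
fn default_config_path(home: &str, xdg_config_home: Option<&str>) -> PathBuf {
    // 1. The old default location wins if a config already exists there.
    let legacy = PathBuf::from(home)
        .join("Library/Application Support/bottom/bottom.toml");
    if legacy.exists() {
        return legacy;
    }

    // 2. Otherwise, honour $XDG_CONFIG_HOME if it is set.
    if let Some(xdg) = xdg_config_home {
        return PathBuf::from(xdg).join("bottom/bottom.toml");
    }

    // 3. Fall back to the old default path (one possible choice for new installs).
    legacy
}

fn main() {
    let p = default_config_path("/Users/me", Some("/Users/me/.config"));
    println!("{}", p.display());
}
```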
allcontributors[bot]
bb6a996c42
docs: add woodsb02 as a contributor for doc (#1567)
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
2024-08-19 10:56:58 -04:00
Ben Woods
74ae124fcc
docs: Add Terra installation instructions (#1566)
* README.md: Add Terra RPM repo instructions

* update some wording

---------

Co-authored-by: Clement Tsang <34804052+ClementTsang@users.noreply.github.com>
2024-08-19 10:56:47 -04:00
Zeb Piasecki
cbe27997bd
fix: add extra row for basic cpu widget if using avg row on cores % 4 != 0 (#1565) 2024-08-19 10:51:03 -04:00
Clement Tsang
5a009987ac
docs: update docs around disable_gpu change. (#1562)
* docs: update changelog

* update docs
2024-08-14 20:27:32 -04:00
shurizzle
6b0a285541
refactor: rename flags.enable_gpu to flags.disable_gpu (false by default) (#1559)
Co-authored-by: shurizzle <me@shurizzle.dev>
2024-08-14 18:22:47 -04:00
Clement Tsang
1f011bd918
docs: update doc about mkdocs, changelog, and versioning (#1561)
* docs: update doc about mkdocs

* docs: more readme docs wording

* add some details about versioning
2024-08-13 22:44:32 -04:00
Clement Tsang
277a30bca5
docs: update changelog to mention the change to enable_gpu (#1560) 2024-08-13 22:31:41 -04:00
Clement Tsang
d9d9e1df9f
other: show N/A for Nvidia GPUs if we detect one but can't get temps (#1557)
* other: show N/A for Nvidia GPUs if we detect one but can't get the temperature

* refactor: driveby refactor of filter system and code for temp

* missed one
2024-08-11 17:20:07 -04:00
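A small illustrative sketch of the "N/A" behaviour from #1557 (types and names are assumptions, not bottom's actual data structures): keep a detected GPU in the list and store its temperature as an `Option`, so a failed read renders as "N/A" instead of dropping the sensor.

```rust
// Illustrative only: a failed temperature read becomes None, shown as "N/A".
struct GpuTemp {
    name: String,
    temperature_c: Option<f32>, // None => couldn't read temps for this GPU
}

fn display_temp(gpu: &GpuTemp) -> String {
    match gpu.temperature_c {
        Some(t) => format!("{}: {:.0}°C", gpu.name, t),
        None => format!("{}: N/A", gpu.name),
    }
}

fn main() {
    let gpu = GpuTemp { name: "NVIDIA GPU".into(), temperature_c: None };
    assert_eq!(display_temp(&gpu), "NVIDIA GPU: N/A");
}
```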
Clement Tsang
c65121c43a
refactor: clean up some config and panic code (#1556)
* clean up some code

* refactor cancellation system to a separate cancellation token struct and clean up panic code
2024-08-11 02:04:44 -04:00
Clement Tsang
96ed26d87a
other: add another test to validate default config (#1553)
* update changelog

* add another lib test to make sure valid integration configs are actually valid

* only test these on default config

* clippy

* add extra CI fail check

* fix windows
2024-08-08 04:44:48 -04:00
Clement Tsang
cf47cb9fae
other: add test to make sure default config is valid (#1552) 2024-08-08 01:36:42 +00:00
Frederick Zhang
4c8367225a
fix: missing parent section names in TOML (#1551) 2024-08-07 12:59:44 -04:00
Clement Tsang
218d1899fc
docs: fix Debian package links (#1550)
Fixes .deb package links in examples.
2024-08-06 11:17:42 -04:00
Clement Tsang
537a67152f
docs: fix all-contributors chart alignment (#1548) 2024-08-06 00:33:01 -04:00
Clement Tsang
53079c698a
docs: fix dupe in all-contributors chart (#1547) 2024-08-06 00:32:01 -04:00
196 changed files with 7147 additions and 5745 deletions

View File

@ -490,6 +490,115 @@
"contributions": [
"code"
]
},
{
"login": "woodsb02",
"name": "Ben Woods",
"avatar_url": "https://avatars.githubusercontent.com/u/7113557?v=4",
"profile": "https://www.woods.am/",
"contributions": [
"doc"
]
},
{
"login": "stephen-huan",
"name": "Stephen Huan",
"avatar_url": "https://avatars.githubusercontent.com/u/20411956?v=4",
"profile": "http://cgdct.moe",
"contributions": [
"code"
]
},
{
"login": "jasongwartz",
"name": "Jason Gwartz",
"avatar_url": "https://avatars.githubusercontent.com/u/10981911?v=4",
"profile": "https://github.com/jasongwartz",
"contributions": [
"doc"
]
},
{
"login": "llc0930",
"name": "llc0930",
"avatar_url": "https://avatars.githubusercontent.com/u/14966910?v=4",
"profile": "https://github.com/llc0930",
"contributions": [
"code"
]
},
{
"login": "yretenai",
"name": "Ada Ahmed",
"avatar_url": "https://avatars.githubusercontent.com/u/614231?v=4",
"profile": "https://chronovore.dev",
"contributions": [
"code"
]
},
{
"login": "Wateir",
"name": "Wateir",
"avatar_url": "https://avatars.githubusercontent.com/u/78731687?v=4",
"profile": "https://github.com/Wateir",
"contributions": [
"doc"
]
},
{
"login": "al42and",
"name": "Andrey Alekseenko",
"avatar_url": "https://avatars.githubusercontent.com/u/933873?v=4",
"profile": "https://github.com/al42and",
"contributions": [
"code"
]
},
{
"login": "fgimian",
"name": "Fotis Gimian",
"avatar_url": "https://avatars.githubusercontent.com/u/1811813?v=4",
"profile": "http://fgimian.github.io/",
"contributions": [
"code",
"doc"
]
},
{
"login": "SigmaSquadron",
"name": "Fernando Rodrigues",
"avatar_url": "https://avatars.githubusercontent.com/u/174749595?v=4",
"profile": "https://sigmasquadron.net",
"contributions": [
"doc"
]
},
{
"login": "mtoohey31",
"name": "Matthew Toohey",
"avatar_url": "https://avatars.githubusercontent.com/u/36740602?v=4",
"profile": "https://mtoohey.com",
"contributions": [
"code"
]
},
{
"login": "win8linux",
"name": "Julius Enriquez",
"avatar_url": "https://avatars.githubusercontent.com/u/11584387?v=4",
"profile": "https://meander.site",
"contributions": [
"doc"
]
},
{
"login": "benjamb",
"name": "Ben Brown",
"avatar_url": "https://avatars.githubusercontent.com/u/8291297?v=4",
"profile": "https://github.com/benjamb",
"contributions": [
"code"
]
}
],
"contributorsPerLine": 7,

View File

@ -39,35 +39,6 @@ env:
CARGO_PROFILE_DEV_DEBUG: "0"
CARGO_HUSKY_DONT_INSTALL_HOOKS: "true"
test_task:
auto_cancellation: "false" # We set this to false to prevent nightly builds from affecting this
only_if: $CIRRUS_BUILD_SOURCE != "api" && ($CIRRUS_BRANCH == "main" || $CIRRUS_PR != "")
timeout_in: "15m"
skip: "!changesInclude('.cargo/**', 'sample_configs/**', 'scripts/cirrus/**', 'src/**', 'tests/**', '.cirrus.yml', 'build.rs', 'Cargo.lock', 'Cargo.toml', 'clippy.toml', 'rustfmt.toml')"
matrix:
- name: "FreeBSD 14 Test"
freebsd_instance:
image_family: freebsd-14-0
- name: "FreeBSD 13 Test"
freebsd_instance:
image_family: freebsd-13-3
<<: *SETUP_TEMPLATE
<<: *CACHE_TEMPLATE
test_no_feature_script:
- . $HOME/.cargo/env
- cargo fmt --all -- --check
- cargo test --no-run --locked --no-default-features
- cargo test --no-fail-fast --no-default-features -- --nocapture --quiet
- cargo clippy --all-targets --workspace --no-default-features -- -D warnings
test_all_feature_script:
- . $HOME/.cargo/env
- cargo fmt --all -- --check
- cargo test --no-run --locked --all-features
- cargo test --no-fail-fast --all-features -- --nocapture --quiet
- cargo clippy --all-targets --workspace --all-features -- -D warnings
<<: *CLEANUP_TEMPLATE
release_task:
auto_cancellation: "false"
only_if: $CIRRUS_BUILD_SOURCE == "api" && $BTM_BUILD_RELEASE_CALLER == "ci"
@ -78,22 +49,6 @@ release_task:
MANPAGE_DIR: "target/tmp/bottom/manpage/"
# -PLACEHOLDER FOR CI-
matrix:
- name: "FreeBSD 14 Build"
alias: "freebsd_14_0_build"
freebsd_instance:
image_family: freebsd-14-0
env:
TARGET: "x86_64-unknown-freebsd"
NAME: "x86_64-unknown-freebsd-14-0"
- name: "FreeBSD 13 Build"
alias: "freebsd_13_3_build"
freebsd_instance:
image_family: freebsd-13-3
env:
TARGET: "x86_64-unknown-freebsd"
NAME: "x86_64-unknown-freebsd-13-3"
- name: "Legacy Linux (2.17)"
alias: "linux_2_17_build"
container:

View File

@ -51,8 +51,10 @@ body:
- type: dropdown
id: filesystem
validations:
required: false
attributes:
label: What filesystem(s) are you using?
label: (Optional) What filesystem(s) are you using?
description: >
If you know, please select what filesystem(s) you are using on the system that is experiencing the problem. This
can be especially helpful if the issue is related to either the disk or memory widgets.

View File

@ -1,35 +0,0 @@
# A routine check to see if there are any Rust-specific security vulnerabilities in the repo we should be aware of.
name: audit
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * 1"
jobs:
audit:
timeout-minutes: 18
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- name: Set up Rust toolchain
uses: dtolnay/rust-toolchain@21dc36fb71dd22e3317045c0c31a3f4249868b17
with:
toolchain: stable
- name: Enable Rust cache
uses: Swatinem/rust-cache@9bdad043e88c75890e36ad3bbc8d27f0090dd609 # 2.7.3
with:
cache-targets: false
cache-all-crates: true
cache-on-failure: true
- name: Install cargo-audit
run: |
cargo install cargo-audit --locked
rm -rf ~/.cargo/registry || echo "no registry to delete"
- uses: rustsec/audit-check@dd51754d4e59da7395a4cd9b593f0ff2d61a9b95 # v1.4.1
with:
token: ${{ secrets.GITHUB_TOKEN }}

View File

@ -1,11 +1,10 @@
# Builds the following releases:
# - Binary releases
# - Binaries
# - Binaries via VMs
# - Cirrus binaries (currently just Linux 2.17)
# - MSI installer for Windows (.msi)
# - .deb releases
# - .rpm releases
# - MSI installer for Windows (.msi)
# - Cirrus CI binaries
# - FreeBSD (x86_64)
# - macOS (aarch64)
name: "build releases"
@ -38,13 +37,15 @@ jobs:
name: "Build binaries"
runs-on: ${{ matrix.info.os }}
container: ${{ matrix.info.container }}
timeout-minutes: 30
timeout-minutes: 12
strategy:
fail-fast: false
matrix:
info:
# ======= Supported targets =======
# Linux (x86-64, x86, aarch64)
#
# TODO: In the future, when ARM runners are available on github, switch ARM targets off of cross.
- {
os: "ubuntu-20.04",
target: "x86_64-unknown-linux-gnu",
@ -78,7 +79,7 @@ jobs:
}
# macOS (x86-64 and aarch64)
- { os: "macos-12", target: "x86_64-apple-darwin", cross: false }
- { os: "macos-13", target: "x86_64-apple-darwin", cross: false }
- { os: "macos-14", target: "aarch64-apple-darwin", cross: false }
# Windows (x86-64, x86)
@ -120,16 +121,14 @@ jobs:
target: "riscv64gc-unknown-linux-gnu",
cross: true,
}
# Seems like cross' FreeBSD image is a bit broken? I
# get build errors, may be related to this issue:
# https://github.com/cross-rs/cross/issues/1291
steps:
- name: Checkout repository
if: matrix.info.container == ''
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
fetch-depth: 1
- name: Checkout repository (non-GitHub container)
if: matrix.info.container != ''
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
@ -179,11 +178,11 @@ jobs:
shell: bash
run: |
cp target/${{ matrix.info.target }}/release/btm ./btm
tar -czvf bottom_${{ matrix.info.target }}${{ matrix.info.suffix }}.tar.gz btm completion
echo "ASSET=bottom_${{ matrix.info.target }}${{ matrix.info.suffix }}.tar.gz" >> $GITHUB_ENV
tar -czvf bottom_${{ matrix.info.target }}.tar.gz btm completion
echo "ASSET=bottom_${{ matrix.info.target }}.tar.gz" >> $GITHUB_ENV
- name: Generate artifact attestation for file
uses: actions/attest-build-provenance@v1
uses: actions/attest-build-provenance@6149ea5740be74af77f260b9db67e633f6b0a9a1 # v1.4.2
with:
subject-path: ${{ env.ASSET }}
@ -218,16 +217,96 @@ jobs:
uses: actions/upload-artifact@26f96dfa697d77e81fd5907df203aa23a56210a8 # v4.3.0
with:
retention-days: 3
name: "release-${{ matrix.info.target }}${{ matrix.info.suffix }}"
name: "release-${{ matrix.info.target }}"
path: release
build-vm:
name: "Build binaries via VMs"
runs-on: "ubuntu-latest"
timeout-minutes: 12
strategy:
fail-fast: false
matrix:
info:
# Seems like cross' FreeBSD image is a bit broken? I get build errors, may be related to this issue:
# https://github.com/cross-rs/cross/issues/1291
#
# Alas, that's why we do it with VMs.
- {
type: "freebsd",
os_release: "15.0",
target: "x86_64-unknown-freebsd",
}
- {
type: "freebsd",
os_release: "14.1",
target: "x86_64-unknown-freebsd",
}
- {
type: "freebsd",
os_release: "13.3",
target: "x86_64-unknown-freebsd",
}
steps:
- name: Checkout repository
if: matrix.info.container == ''
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
- name: Build (FreeBSD)
if: ${{ matrix.info.type == 'freebsd' }}
uses: vmactions/freebsd-vm@debf37ca7b7fa40e19c542ef7ba30d6054a706a4 # v1.1.5
with:
release: "${{ matrix.info.os_release }}"
envs: "RUST_BACKTRACE CARGO_INCREMENTAL CARGO_PROFILE_DEV_DEBUG CARGO_HUSKY_DONT_INSTALL_HOOKS COMPLETION_DIR MANPAGE_DIR"
usesh: true
prepare: |
pkg install -y curl bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs --output rustup.sh
sh rustup.sh --default-toolchain stable -y
run: |
. "$HOME/.cargo/env"
BTM_GENERATE=true BTM_BUILD_RELEASE_CALLER=${{ inputs.caller }} cargo build --release --verbose --locked --target=${{ matrix.info.target }} --features deploy
- name: Move automatically generated completion/manpage
shell: bash
run: |
mv "$COMPLETION_DIR" completion
mv "$MANPAGE_DIR" manpage
- name: Bundle release and completion
shell: bash
run: |
cp target/${{ matrix.info.target }}/release/btm ./btm
tar -czvf bottom_${{ matrix.info.target }}-${{ matrix.info.os_release }}.tar.gz btm completion
echo "ASSET=bottom_${{ matrix.info.target }}-${{ matrix.info.os_release }}.tar.gz" >> $GITHUB_ENV
- name: Generate artifact attestation for file
uses: actions/attest-build-provenance@6149ea5740be74af77f260b9db67e633f6b0a9a1 # v1.4.2
with:
subject-path: ${{ env.ASSET }}
- name: Create release directory for artifact, move file
shell: bash
run: |
mkdir release
mv ${{ env.ASSET }} release/
- name: Save release as artifact
uses: actions/upload-artifact@26f96dfa697d77e81fd5907df203aa23a56210a8 # v4.3.0
with:
retention-days: 3
name: "release-${{ matrix.info.target }}-${{ matrix.info.os_release }}"
path: release
build-msi:
name: "Build MSI installer"
name: "Build MSI (WiX) installer"
runs-on: "windows-2019"
timeout-minutes: 30
timeout-minutes: 12
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
@ -246,16 +325,20 @@ jobs:
toolchain: stable
target: x86_64-pc-windows-msvc
- name: Install cargo-wix
shell: powershell
run: |
cargo install cargo-wix --version 0.3.8 --locked
- name: Build MSI file
shell: powershell
env:
BTM_GENERATE: ""
BTM_GENERATE: true
run: |
cargo install cargo-wix --version 0.3.8 --locked
cargo wix
cargo wix --nocapture
- name: Generate artifact attestation for file
uses: actions/attest-build-provenance@v1
uses: actions/attest-build-provenance@6149ea5740be74af77f260b9db67e633f6b0a9a1 # v1.4.2
with:
subject-path: "bottom_x86_64_installer.msi"
@ -275,10 +358,10 @@ jobs:
build-cirrus:
name: "Build using Cirrus CI"
runs-on: "ubuntu-latest"
timeout-minutes: 30
timeout-minutes: 12
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 0
@ -299,7 +382,7 @@ jobs:
python ./scripts/cirrus/release.py "$BRANCH" "release/" "${{ inputs.caller }}"
- name: Generate artifact attestation for file
uses: actions/attest-build-provenance@v1
uses: actions/attest-build-provenance@6149ea5740be74af77f260b9db67e633f6b0a9a1 # v1.4.2
with:
subject-path: "release/**/*.tar.gz"
@ -313,10 +396,11 @@ jobs:
build-deb:
name: "Build .deb software packages"
runs-on: "ubuntu-20.04"
timeout-minutes: 30
timeout-minutes: 12
strategy:
fail-fast: false
matrix:
# TODO: In the future, when ARM runners are available on github, switch ARM targets off of cross.
info:
- { target: "x86_64-unknown-linux-gnu", dpkg: amd64 }
- { target: "x86_64-unknown-linux-musl", cross: true, dpkg: amd64 }
@ -346,7 +430,7 @@ jobs:
}
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
@ -420,7 +504,7 @@ jobs:
rm -r ./target/${{ matrix.info.target }}/debian/
- name: Generate artifact attestation for file
uses: actions/attest-build-provenance@v1
uses: actions/attest-build-provenance@6149ea5740be74af77f260b9db67e633f6b0a9a1 # v1.4.2
with:
subject-path: ${{ steps.verify.outputs.DEB_FILE }}
@ -441,7 +525,7 @@ jobs:
name: "Build .rpm software packages"
runs-on: ubuntu-latest
container: ghcr.io/clementtsang/almalinux-8
timeout-minutes: 30
timeout-minutes: 12
strategy:
fail-fast: false
matrix:
@ -450,7 +534,7 @@ jobs:
- { target: "x86_64-unknown-linux-musl", cross: true }
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
@ -510,7 +594,7 @@ jobs:
rm -r ./target/${{ matrix.info.target }}/generate-rpm/
- name: Generate artifact attestation for file
uses: actions/attest-build-provenance@v1
uses: actions/attest-build-provenance@6149ea5740be74af77f260b9db67e633f6b0a9a1 # v1.4.2
with:
subject-path: ${{ steps.verify.outputs.RPM_FILE }}

View File

@ -1,5 +1,5 @@
# Main CI workflow to validate PRs and branches are correctly formatted
# and pass tests.
# Main CI workflow to validate that files are formatted correctly, pass tests,
# and pass lints.
#
# CI workflow was based on a lot of work from other people:
# - https://github.com/heim-rs/heim/blob/master/.github/workflows/ci.yml
@ -8,16 +8,12 @@
# - https://matklad.github.io/2021/09/04/fast-rust-builds.html
#
# Supported platforms run the following tasks:
# - cargo fmt
# - cargo test (built/test in separate steps)
# - cargo clippy (apparently faster to do it after the build/test)
# - Format
# - Test (built/test in separate steps)
# - Clippy (apparently faster to do it after the build/test)
#
# Unsupported platforms run the following tasks:
# - cargo build
#
# Note that not all platforms are tested using this CI action! There are some
# tested by Cirrus CI due to (free) platform limitations on GitHub. Currently,
# this is just macOS M1 and FreeBSD.
# - Clippy
name: ci
@ -57,12 +53,12 @@ jobs:
# Runs rustfmt + tests + clippy on the main supported platforms.
#
# Note that m1 macOS is tested via CirrusCI.
# TODO: In the future, when ARM runners are available on github, switch ARM targets off of cross.
supported:
needs: pre-job
if: ${{ needs.pre-job.outputs.should_skip != 'true' }}
runs-on: ${{ matrix.info.os }}
timeout-minutes: 18
timeout-minutes: 12
strategy:
fail-fast: false
matrix:
@ -77,7 +73,7 @@ jobs:
target: "aarch64-unknown-linux-gnu",
cross: true,
}
- { os: "macos-12", target: "x86_64-apple-darwin", cross: false }
- { os: "macos-13", target: "x86_64-apple-darwin", cross: false }
- { os: "macos-14", target: "aarch64-apple-darwin", cross: false }
- {
os: "windows-2019",
@ -87,7 +83,7 @@ jobs:
features: ["--all-features", "--no-default-features"]
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Set up Rust toolchain
uses: dtolnay/rust-toolchain@21dc36fb71dd22e3317045c0c31a3f4249868b17
@ -137,13 +133,14 @@ jobs:
RUST_BACKTRACE: full
# Try running cargo build on all other platforms.
#
# TODO: Maybe some of these should be allowed to fail? If so, I guess we can add back the "unofficial" MSRV,
# I would also put android there.
other-check:
needs: pre-job
runs-on: ${{ matrix.info.os }}
if: ${{ needs.pre-job.outputs.should_skip != 'true' }}
timeout-minutes: 20
timeout-minutes: 12
strategy:
fail-fast: false
matrix:
@ -180,8 +177,8 @@ jobs:
rust: "beta",
}
- {
os: "macos-12",
target: "x86_64-apple-darwin",
os: "macos-14",
target: "aarch64-apple-darwin",
cross: false,
rust: "beta",
}
@ -214,6 +211,7 @@ jobs:
}
# Risc-V 64gc
# Note: seems like this breaks with tests?
- {
os: "ubuntu-latest",
target: "riscv64gc-unknown-linux-gnu",
@ -227,17 +225,19 @@ jobs:
cross: true,
cross-version: "git:cabfc3b02d1edec03869fabdabf6a7f8b0519160",
no-default-features: true,
no-clippy: true,
}
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Set up Rust toolchain
uses: dtolnay/rust-toolchain@21dc36fb71dd22e3317045c0c31a3f4249868b17
with:
toolchain: ${{ matrix.info.rust || 'stable' }}
target: ${{ matrix.info.target }}
components: "clippy"
- name: Enable Rust cache
uses: Swatinem/rust-cache@9bdad043e88c75890e36ad3bbc8d27f0090dd609 # 2.7.3
@ -246,37 +246,100 @@ jobs:
key: ${{ matrix.info.target }}
cache-all-crates: true
- name: Try building with only default features enabled
- name: Clippy (default features)
uses: ClementTsang/cargo-action@v0.0.5
if: ${{ matrix.info.no-default-features != true }}
with:
command: build
args: --all-targets --verbose --target=${{ matrix.info.target }} --locked
command: clippy
args: --all-targets --workspace --target=${{ matrix.info.target }} --locked
use-cross: ${{ matrix.info.cross }}
cross-version: ${{ matrix.info.cross-version || '0.2.5' }}
- name: Try building with no features enabled
- name: Clippy (no features enabled)
uses: ClementTsang/cargo-action@v0.0.5
if: ${{ matrix.info.no-default-features == true }}
with:
command: build
args: --all-targets --verbose --target=${{ matrix.info.target }} --locked --no-default-features
command: clippy
args: --all-targets --workspace --target=${{ matrix.info.target }} --locked --no-default-features
use-cross: ${{ matrix.info.cross }}
cross-version: ${{ matrix.info.cross-version || '0.2.5' }}
vm-check:
name: "Test using VMs"
needs: pre-job
if: ${{ needs.pre-job.outputs.should_skip != 'true' }}
runs-on: "ubuntu-latest"
timeout-minutes: 15
strategy:
fail-fast: false
matrix:
info:
# Seems like cross' FreeBSD image is a bit broken? I get build errors, may be related to this issue:
# https://github.com/cross-rs/cross/issues/1291
#
# Alas, that's why we do it with VMs.
- {
type: "freebsd",
os_release: "15.0",
target: "x86_64-unknown-freebsd",
}
- {
type: "freebsd",
os_release: "14.1",
target: "x86_64-unknown-freebsd",
}
- {
type: "freebsd",
os_release: "13.3",
target: "x86_64-unknown-freebsd",
}
steps:
- name: Checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
- name: Enable Rust cache
uses: Swatinem/rust-cache@9bdad043e88c75890e36ad3bbc8d27f0090dd609 # 2.7.3
if: ${{ github.event_name != 'pull_request' || ! github.event.pull_request.head.repo.fork }} # If it is a PR, only if not a fork
with:
key: ${{ matrix.info.target }}-${{ matrix.info.os_release }}
cache-all-crates: true
- name: Clippy (FreeBSD)
if: ${{ matrix.info.type == 'freebsd' }}
uses: vmactions/freebsd-vm@debf37ca7b7fa40e19c542ef7ba30d6054a706a4 # v1.1.5
with:
release: "${{ matrix.info.os_release }}"
envs: "RUST_BACKTRACE CARGO_INCREMENTAL CARGO_PROFILE_DEV_DEBUG CARGO_HUSKY_DONT_INSTALL_HOOKS"
usesh: true
prepare: |
pkg install -y curl bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs --output rustup.sh
sh rustup.sh --default-toolchain stable -y
run: |
. "$HOME/.cargo/env"
cargo clippy --all-targets --workspace -- -D warnings
completion:
name: "CI Pass Check"
needs: [supported, other-check]
if: ${{ success() || failure() }}
needs: [supported, other-check, vm-check]
if: ${{ needs.supported.result != 'skipped' && needs.other-check.result != 'skipped' && needs.vm-check.result != 'skipped' }}
runs-on: "ubuntu-latest"
steps:
- name: CI Passed
if: ${{ (needs.supported.result == 'success' && needs.other-check.result == 'success') || (needs.supported.result == 'skipped' && needs.other-check.result == 'skipped') }}
if: ${{ needs.supported.result == 'success' && needs.other-check.result == 'success' && needs.vm-check.result == 'success' }}
run: |
echo "CI workflow completed successfully.";
- name: CI Failed
if: ${{ needs.supported.result == 'failure' && needs.other-check.result == 'failure' }}
if: ${{ needs.supported.result == 'failure' && needs.other-check.result == 'failure' && needs.vm-check.result == 'failure' }}
run: |
echo "CI workflow failed at some point.";
echo "CI workflow failed.";
exit 1;
- name: CI Cancelled
if: ${{ needs.supported.result == 'cancelled' && needs.other-check.result == 'cancelled' && needs.vm-check.result == 'cancelled' }}
run: |
echo "CI workflow was cancelled.";
exit 1;

View File

@ -22,7 +22,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1

View File

@ -37,17 +37,17 @@ jobs:
needs: pre-job
if: ${{ needs.pre-job.outputs.should_skip != 'true' }}
runs-on: ${{ matrix.info.os }}
timeout-minutes: 18
timeout-minutes: 12
strategy:
fail-fast: false
matrix:
info:
- { os: "ubuntu-latest", target: "x86_64-unknown-linux-gnu" }
- { os: "macos-12", target: "x86_64-apple-darwin" }
- { os: "macos-14", target: "aarch64-apple-darwin", cross: false }
- { os: "windows-2019", target: "x86_64-pc-windows-msvc" }
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Set up Rust toolchain
uses: dtolnay/rust-toolchain@21dc36fb71dd22e3317045c0c31a3f4249868b17

View File

@ -54,7 +54,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
@ -68,7 +68,7 @@ jobs:
echo "Release version: ${{ env.RELEASE_VERSION }}"
- name: Get release artifacts
uses: actions/download-artifact@6b208ae046db98c579e8a3aa621ab581ff575935 # v4.1.1
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
pattern: release-*
path: release
@ -106,7 +106,7 @@ jobs:
echo "Release version: ${{ env.RELEASE_VERSION }}"
- name: Get release artifacts
uses: actions/download-artifact@6b208ae046db98c579e8a3aa621ab581ff575935 # v4.1.1
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
pattern: release-*
path: release
@ -118,7 +118,7 @@ jobs:
du -h -d 0 ./release/*
- name: Create release and add release files
uses: softprops/action-gh-release@20e085ccc73308c2c8e43ab8da4f8d7ecbb94d4e # 2.0.1
uses: softprops/action-gh-release@c062e08bd532815e2082a85e87e3ef29c3e6d191 # 2.0.8
with:
token: ${{ secrets.GITHUB_TOKEN }}
prerelease: false

View File

@ -1,6 +1,7 @@
# Workflow to deploy mkdocs documentation.
name: docs
on:
workflow_dispatch:
push:
@ -21,13 +22,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b # v5.3.0
with:
python-version: 3.11
python-version: 3.12
- name: Install Python dependencies
run: pip install -r docs/requirements.txt

View File

@ -9,9 +9,10 @@ on:
workflow_dispatch:
inputs:
isMock:
description: "Replace with any word other than 'mock' to trigger a non-mock run."
default: "mock"
description: "Mock run"
default: true
required: false
type: boolean
env:
CARGO_INCREMENTAL: 0
@ -19,8 +20,9 @@ env:
CARGO_HUSKY_DONT_INSTALL_HOOKS: true
jobs:
# Check if things should be skipped.
pre-job:
# Check if things should be skipped, or if this is a mock job.
initialize-job:
name: initialize-job
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
@ -32,18 +34,11 @@ jobs:
skip_after_successful_duplicate: "true"
do_not_skip: '["workflow_dispatch"]'
initialize-job:
name: initialize-job
needs: pre-job
if: ${{ needs.pre-job.outputs.should_skip != 'true' }}
runs-on: ubuntu-latest
steps:
- name: Check if mock
run: |
echo "${{ github.event.inputs.isMock }}";
if [[ -z "${{ github.event.inputs.isMock }}" ]]; then
echo "This is a scheduled nightly run."
elif [[ "${{ github.event.inputs.isMock }}" == "mock" ]]; then
elif [[ "${{ github.event.inputs.isMock }}" == "true" ]]; then
echo "This is a mock run."
else
echo "This is NOT a mock run. Watch for the generated files!"
@ -51,6 +46,7 @@ jobs:
build-release:
needs: initialize-job
if: ${{ needs.initialize-job.outputs.should_skip != 'true' }}
uses: ./.github/workflows/build_releases.yml
with:
caller: "nightly"
@ -62,12 +58,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 1
- name: Get release artifacts
uses: actions/download-artifact@6b208ae046db98c579e8a3aa621ab581ff575935 # v4.1.1
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
pattern: release-*
path: release
@ -79,18 +75,18 @@ jobs:
du -h -d 0 ./release/*
- name: Delete tag and release if not mock
if: github.event.inputs.isMock != 'mock'
if: github.event.inputs.isMock != 'true'
run: gh release delete nightly --cleanup-tag
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Sleep for a few seconds to prevent timing issues between the deletion and creation of the release
run: sleep 10
if: github.event.inputs.isMock != 'mock'
if: github.event.inputs.isMock != 'true'
- name: Add all release files and create nightly release if not mock
uses: softprops/action-gh-release@20e085ccc73308c2c8e43ab8da4f8d7ecbb94d4e # 2.0.1
if: github.event.inputs.isMock != 'mock'
uses: softprops/action-gh-release@c062e08bd532815e2082a85e87e3ef29c3e6d191 # 2.0.8
if: github.event.inputs.isMock != 'true'
with:
token: ${{ secrets.GITHUB_TOKEN }}
prerelease: true

View File

@ -27,22 +27,18 @@ jobs:
version: ${{ env.VERSION }}
steps:
- name: Get the release version from the tag
if: env.VERSION == ''
run: |
if [[ -n "${{ github.event.inputs.tag }}" ]]; then
echo "Manual run against a tag; overriding actual tag in the environment..."
echo "VERSION=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
echo "VERSION=${{ github.event.inputs.tag }}" >> "$GITHUB_ENV"
else
echo "VERSION=${{ github.event.release.tag_name }}" >> $GITHUB_ENV
echo "VERSION=${{ github.event.release.tag_name }}" >> "$GITHUB_ENV"
fi
- name: Test env
run: |
echo ${{ env.VERSION }}
- name: Make sure you're not on master/main/nightly
run: |
if [[ ${{ env.VERSION }} == "master" || ${{ env.VERSION }} == "main" || ${{ env.VERSION }} == "nightly" ]]; then
echo ${{ env.VERSION }}
if [[ ${{ env.VERSION }} == "master" || ${{ env.VERSION }} == "main" || ${{ env.VERSION }} == "nightly" ]]; then
exit 1
fi
@ -60,13 +56,13 @@ jobs:
echo "Release version: ${{ env.RELEASE_VERSION }}"
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b # v5.3.0
with:
python-version: 3.11
python-version: 3.12
- name: Install Python dependencies
run: pip install -r docs/requirements.txt
@ -120,7 +116,7 @@ jobs:
echo "Release version: ${{ env.RELEASE_VERSION }}"
- name: Automatically create PR for winget repos
uses: vedantmgoyal2009/winget-releaser@0db4f0a478166abd0fa438c631849f0b8dcfb99f
uses: vedantmgoyal2009/winget-releaser@4ffc7888bffd451b357355dc214d43bb9f23917e
with:
identifier: Clement.bottom
installers-regex: '^bottom_x86_64_installer\.msi$'

View File

@ -1,6 +1,7 @@
# Small CI workflow to test if mkdocs documentation can be successfully built.
name: test docs
on:
workflow_dispatch:
pull_request:
@ -29,13 +30,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b # v5.3.0
with:
python-version: 3.11
python-version: 3.12
- name: Install Python dependencies
run: pip install -r docs/requirements.txt

View File

@ -9,6 +9,7 @@ on:
- main
paths:
- "schema/**"
- "scripts/schema/**"
- ".github/workflows/validate_schema.yml"
concurrency:
@ -35,13 +36,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b # v5.3.0
with:
python-version: 3.11
python-version: 3.12
- name: Install Python dependencies
run: pip install -r scripts/schema/requirements.txt
@ -51,7 +52,6 @@ jobs:
python3 scripts/schema/validator.py -s ./schema/nightly/bottom.json -f ./sample_configs/default_config.toml
python3 scripts/schema/validator.py --uncomment -s ./schema/nightly/bottom.json -f ./sample_configs/default_config.toml
python3 scripts/schema/validator.py -s ./schema/nightly/bottom.json -f ./sample_configs/demo_config.toml
- name: Test nightly catches on a bad sample config
run: |

View File

@ -1,9 +1,54 @@
# Changelog
All notable changes to this project will be documented in this file.
All notable changes to this project will be documented in this file. The format is based on
[Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
Versioning for this project is based on [Semantic Versioning](https://semver.org/spec/v2.0.0.html). More specifically:
**Pre 1.0.0 (current)**:
- Patch versions should aim to only contain bug fixes or non-breaking features/changes.
- Minor versions may break things.
**Post 1.0.0**:
- Patch versions should only contain bug fixes.
- Minor versions should only contain forward-compatible features/changes.
- Major versions may break things.
That said, these are more guidelines rather than hardset rules, though the project will generally try to follow them.
---
## [0.11.0] - Unreleased
### Features
- [#1625](https://github.com/ClementTsang/bottom/pull/1625): Add the ability to configure the disk widget's table columns.
- [#1641](https://github.com/ClementTsang/bottom/pull/1641): Support AMD GPU data collection on Linux.
- [#1642](https://github.com/ClementTsang/bottom/pull/1642): Support changing the widget borders.
### Bug Fixes
- [#1551](https://github.com/ClementTsang/bottom/pull/1551): Fix missing parent section names in default config.
- [#1552](https://github.com/ClementTsang/bottom/pull/1552): Fix typo in default config.
- [#1578](https://github.com/ClementTsang/bottom/pull/1578): Fix missing selected text background colour in `default-light` theme.
- [#1593](https://github.com/ClementTsang/bottom/pull/1593): Fix using `"none"` for chart legend position in configs.
- [#1594](https://github.com/ClementTsang/bottom/pull/1594): Fix incorrect default config definitions for chart legends.
- [#1596](https://github.com/ClementTsang/bottom/pull/1596): Fix support for nilfs2 file system.
- [#1660](https://github.com/ClementTsang/bottom/pull/1660): Fix properly cleaning up the terminal if the program is terminated due to an `Err` bubbling to the top.
- [#1663](https://github.com/ClementTsang/bottom/pull/1663): Fix network graphs using log scaling having broken lines when a point was 0.
- [#1683](https://github.com/ClementTsang/bottom/pull/1683): Fix graph lines potentially showing up behind legends.
### Changes
- [#1559](https://github.com/ClementTsang/bottom/pull/1559): Rename `--enable_gpu` to `--disable_gpu`, and make GPU features enabled by default.
- [#1570](https://github.com/ClementTsang/bottom/pull/1570): Consider `$XDG_CONFIG_HOME` on macOS when looking for a default config path in a backwards-compatible fashion.
- [#1686](https://github.com/ClementTsang/bottom/pull/1686): Allow hyphenated arguments to work as well (e.g. `--autohide-time`).
### Other
- [#1663](https://github.com/ClementTsang/bottom/pull/1663): Rework how data is stored internally, reducing memory usage a bit.
## [0.10.2] - 2024-08-05
@ -36,6 +81,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changes
- [#1276](https://github.com/ClementTsang/bottom/pull/1276): NVIDIA GPU functionality is now tied behind the `--enable_gpu` flag. This will likely be changed in the future.
- [#1344](https://github.com/ClementTsang/bottom/pull/1344): Change the `group` command line-argument to `group_processes` for consistency with the config file option.
- [#1376](https://github.com/ClementTsang/bottom/pull/1376): Group together related command-line arguments in `-h` and `--help`.
- [#1411](https://github.com/ClementTsang/bottom/pull/1411): Add `time` as a default column.
@ -60,6 +106,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `--colors` is now `--theme`
- [#1513](https://github.com/ClementTsang/bottom/pull/1513): Table headers are now bold by default.
- [#1515](https://github.com/ClementTsang/bottom/pull/1515): Show the config path in the error message if unable to read/create a config.
- [#1682](https://github.com/ClementTsang/bottom/pull/1682): On Linux, temperature sensor labels now always have their first letter capitalized (e.g. "k10temp: tctl" -> "k10temp: Tctl").
### Bug Fixes

954
Cargo.lock generated

File diff suppressed because it is too large

View File

@ -1,21 +1,20 @@
[package]
name = "bottom"
version = "0.10.2"
authors = ["Clement Tsang <cjhtsang@uwaterloo.ca>"]
edition = "2021"
version = "0.11.0"
repository = "https://github.com/ClementTsang/bottom"
keywords = ["cross-platform", "monitoring", "cli", "top", "tui"]
license = "MIT"
categories = ["command-line-utilities", "visualization"]
description = "A customizable cross-platform graphical process/system monitor for the terminal. Supports Linux, macOS, and Windows."
documentation = "https://clementtsang.github.io/bottom/stable"
readme = "README.md"
default-run = "btm"
build = "build.rs"
authors = ["Clement Tsang <cjhtsang@uwaterloo.ca>"]
keywords = ["cross-platform", "monitoring", "cli", "top", "tui"]
categories = ["command-line-utilities", "visualization"]
exclude = [
".cargo-husky/",
".github/",
".idea",
".idea/",
".vscode/",
"assets/",
"desktop/",
@ -37,23 +36,123 @@ exclude = [
"rust-toolchain.toml",
"rustfmt.toml",
]
edition = "2021"
# The oldest version I've tested that should still build - note this is not an official MSRV!
rust-version = "1.74.0"
rust-version = "1.81"
[[bin]]
name = "btm"
path = "src/main.rs"
[lib]
test = true
doctest = true
doc = true
[profile.dev.package."*"]
# Compile dependencies with optimizations on even in debug mode.
opt-level = 3
[[bin]]
name = "btm"
path = "src/bin/main.rs"
doc = false
[profile.no-opt]
inherits = "dev"
opt-level = 0
[[bin]]
name = "schema"
path = "src/bin/schema.rs"
test = false
doctest = false
doc = false
required-features = ["generate_schema"]
[features]
# Used for general builds.
battery = ["starship-battery"]
nvidia = ["nvml-wrapper"]
gpu = ["nvidia"]
zfs = []
deploy = ["battery", "gpu", "zfs"]
default = ["deploy"]
# Should not be included in builds.
logging = ["fern", "log", "time"]
generate_schema = ["schemars", "serde_json", "strum"]
[dependencies]
anyhow = "1.0.97"
backtrace = "0.3.74"
cfg-if = "1.0.0"
clap = { version = "4.5.32", features = ["default", "cargo", "wrap_help", "derive"] }
concat-string = "1.0.1"
crossterm = "0.28.1"
ctrlc = { version = "3.4.5", features = ["termination"] }
dirs = "6.0.0"
hashbrown = "0.15.2"
humantime = "2.2.0"
indexmap = "2.8.0"
indoc = "2.0.6"
itertools = "0.14.0"
nvml-wrapper = { version = "0.10.0", optional = true, features = ["legacy-functions"] }
regex = "1.11.1"
serde = { version = "1.0.219", features = ["derive"] }
starship-battery = { version = "0.10.1", optional = true }
sysinfo = "=0.30.13"
timeless = "0.0.14-alpha"
toml_edit = { version = "0.22.24", features = ["serde"] }
tui = { version = "0.29.0", package = "ratatui" }
unicode-ellipsis = "0.3.0"
unicode-segmentation = "1.12.0"
unicode-width = "0.2.0"
# Used for logging. Mostly a debugging tool.
fern = { version = "0.7.1", optional = true }
log = { version = "0.4.26", optional = true }
time = { version = "0.3.40", features = ["local-offset", "formatting", "macros"], optional = true }
# These are just used for JSON schema generation.
schemars = { version = "0.8.22", optional = true }
serde_json = { version = "1.0.140", optional = true }
strum = { version = "0.27.1", features = ["derive"], optional = true }
[target.'cfg(unix)'.dependencies]
libc = "0.2.171"
[target.'cfg(target_os = "linux")'.dependencies]
rustix = { version = "1.0.3", features = ["fs", "param"] }
[target.'cfg(target_os = "macos")'.dependencies]
core-foundation = "0.10.0"
mach2 = "0.4.2"
[target.'cfg(target_os = "windows")'.dependencies]
windows = { version = "0.61.1", features = [
"Win32_Foundation",
"Win32_Security",
"Win32_Storage_FileSystem",
"Win32_System_IO",
"Win32_System_Ioctl",
"Win32_System_ProcessStatus",
"Win32_System_Threading",
] }
[target.'cfg(target_os = "freebsd")'.dependencies]
serde_json = { version = "1.0.140" }
sysctl = { version = "0.6.0" }
filedescriptor = "0.8.3"
[dev-dependencies]
assert_cmd = "2.0.16"
cargo-husky = { version = "1.5.0", default-features = false, features = ["user-hooks"] }
predicates = "3.1.3"
tempfile = "3.19.1"
[target.'cfg(all(target_arch = "x86_64", target_os = "linux"))'.dev-dependencies]
portable-pty = "0.9.0"
[build-dependencies]
clap = { version = "4.5.32", features = ["default", "cargo", "wrap_help", "derive"] }
clap_complete = "4.5.47"
clap_complete_nushell = "4.5.5"
clap_complete_fig = "4.5.2"
clap_mangen = "0.2.26"
indoc = "2.0.6"
# Compile dependencies with optimizations enabled, even in debug mode.
[profile.dev.package."*"]
opt-level = 3
[profile.release]
debug = 0
@ -67,94 +166,6 @@ inherits = "release"
debug = true
strip = false
[features]
battery = ["starship-battery"]
nvidia = ["nvml-wrapper"]
gpu = ["nvidia"]
zfs = []
deploy = ["battery", "gpu", "zfs"]
default = ["deploy"]
logging = ["fern", "log", "time"]
generate_schema = ["schemars", "serde_json", "strum"]
[dependencies]
anyhow = "1.0.86"
backtrace = "0.3.73"
cfg-if = "1.0.0"
clap = { version = "4.5.13", features = ["default", "cargo", "wrap_help", "derive"] }
concat-string = "1.0.1"
crossterm = "0.27.0"
ctrlc = { version = "3.4.4", features = ["termination"] }
dirs = "5.0.1"
hashbrown = "0.14.5"
humantime = "2.1.0"
indexmap = "2.2.6"
indoc = "2.0.5"
itertools = "0.13.0"
nvml-wrapper = { version = "0.10.0", optional = true, features = ["legacy-functions"] }
regex = "1.10.5"
serde = { version = "1.0.204", features = ["derive"] }
starship-battery = { version = "0.9.1", optional = true }
sysinfo = "=0.30.13"
toml_edit = { version = "0.22.17", features = ["serde"] }
tui = { version = "0.27.0", package = "ratatui" }
unicode-ellipsis = "0.2.0"
unicode-segmentation = "1.11.0"
unicode-width = "0.1.13"
# Used for logging.
fern = { version = "0.6.2", optional = true }
log = { version = "0.4.22", optional = true }
time = { version = "0.3.36", features = ["local-offset", "formatting", "macros"], optional = true }
# These are just used for JSON schema generation.
schemars = { version = "0.8.21", optional = true }
serde_json = { version = "1.0.120", optional = true }
strum = { version = "0.26.3", features = ["derive"], optional = true }
[target.'cfg(unix)'.dependencies]
libc = "0.2.155"
[target.'cfg(target_os = "linux")'.dependencies]
rustix = { version = "0.38.34", features = ["fs", "param"] }
[target.'cfg(target_os = "macos")'.dependencies]
core-foundation = "0.9.4"
mach2 = "0.4.2"
[target.'cfg(target_os = "windows")'.dependencies]
windows = { version = "0.58.0", features = [
"Win32_Foundation",
"Win32_Security",
"Win32_Storage_FileSystem",
"Win32_System_IO",
"Win32_System_Ioctl",
"Win32_System_ProcessStatus",
"Win32_System_Threading",
] }
[target.'cfg(target_os = "freebsd")'.dependencies]
serde_json = { version = "1.0.120" }
sysctl = { version = "0.5.5" }
filedescriptor = "0.8.2"
[dev-dependencies]
assert_cmd = "2.0.15"
cargo-husky = { version = "1.5.0", default-features = false, features = ["user-hooks"] }
predicates = "3.1.0"
[target.'cfg(all(target_arch = "x86_64", target_os = "linux"))'.dev-dependencies]
portable-pty = "0.8.1"
[build-dependencies]
clap = { version = "4.5.13", features = ["default", "cargo", "wrap_help", "derive"] }
clap_complete = "4.5.12"
clap_complete_nushell = "4.5.3"
clap_complete_fig = "4.5.2"
clap_mangen = "0.2.23"
indoc = "2.0.5"
[package.metadata.deb]
section = "utility"
assets = [
@ -210,6 +221,7 @@ depends = "libc6:armhf (>= 2.28)"
[package.metadata.wix]
output = "bottom_x86_64_installer.msi"
[package.metadata.generate-rpm]
assets = [
{ source = "target/release/btm", dest = "/usr/bin/", mode = "755" },

166
README.md
View File

@ -2,7 +2,7 @@
<h1>bottom (btm)</h1>
<p>
A customizable cross-platform graphical process/system monitor for the terminal.<br />Supports Linux, macOS, and Windows. Inspired by <a href=https://github.com/aksakalli/gtop>gtop</a>, <a href=https://github.com/xxxserxxx/gotop>gotop</a>, and <a href=https://github.com/htop-dev/htop/>htop</a>.
A customizable cross-platform graphical process/system monitor for the terminal.<br />Supports Linux, macOS, and Windows. Inspired by <a href=https://github.com/aksakalli/gtop>gtop</a>, <a href=https://github.com/xxxserxxx/gotop>gotop</a>, and <a href=https://github.com/htop-dev/htop>htop</a>.
</p>
[<img src="https://img.shields.io/github/actions/workflow/status/ClementTsang/bottom/ci.yml?branch=main&style=flat-square&logo=github" alt="CI status">](https://github.com/ClementTsang/bottom/actions?query=branch%3Amain)
@ -16,7 +16,7 @@
<img src="assets/demo.gif" alt="Quick demo recording showing off bottom's searching, expanding, and process killing."/>
<p>
<sub>
Demo using the <a href="https://github.com/morhetz/gruvbox">Gruvbox</a> theme (<code>--color gruvbox</code>), along with <a href="https://www.ibm.com/plex/">IBM Plex Mono</a> and <a href="https://sw.kovidgoyal.net/kitty/">Kitty</a>
Demo using the <a href="https://github.com/morhetz/gruvbox">Gruvbox</a> theme (<code>--theme gruvbox</code>), along with <a href="https://www.ibm.com/plex/">IBM Plex Mono</a> and <a href="https://sw.kovidgoyal.net/kitty/">Kitty</a>
</sub>
</p>
</div>
@ -29,12 +29,14 @@
- [Unofficial](#unofficial)
- [Installation](#installation)
- [Cargo](#cargo)
- [Alpine](#alpine)
- [Arch Linux](#arch-linux)
- [Debian / Ubuntu](#debian--ubuntu)
- [Exherbo Linux](#exherbo-linux)
- [Fedora / CentOS / AlmaLinux / Rocky Linux](#fedora--centos--almalinux--rocky-linux)
- [Gentoo](#gentoo)
- [Nix](#nix)
- [openSUSE](#opensuse)
- [Snap](#snap)
- [Solus](#solus)
- [Void](#void)
@ -44,6 +46,7 @@
- [Scoop](#scoop)
- [winget](#winget)
- [Windows installer](#windows-installer)
- [Conda](#conda)
- [Pre-built binaries](#pre-built-binaries)
- [Auto-completion](#auto-completion)
- [Usage](#usage)
@ -85,7 +88,7 @@ As (yet another) process/system visualization and management application, bottom
- Changing the layout of widgets
- Filtering out entries in some widgets
- Some other nice stuff, like:
- And more:
- [An htop-inspired basic mode](https://clementtsang.github.io/bottom/nightly/usage/basic-mode/)
- [Expansion, which focuses on just one widget](https://clementtsang.github.io/bottom/nightly/usage/general-usage/#expansion)
@ -119,7 +122,7 @@ bottom may work on a number of platforms that aren't officially supported. Note
Note that some unsupported platforms may eventually be officially supported (e.g., FreeBSD).
A non-comprehensive list of some currently unofficially supported platforms that may compile/work include:
A non-comprehensive list of some currently unofficially-supported platforms that may compile/work include:
- FreeBSD (`x86_64`)
- Linux (`armv6`, `armv7`, `powerpc64le`, `riscv64gc`)
@ -151,7 +154,7 @@ cargo +stable install bottom --locked
cargo install bottom
```
Alternatively, if you can use `cargo install` using the repo as the source.
Alternatively, you can use `cargo install` using the repo as the source.
```bash
# You might need to update the stable version of Rust first.
@ -171,11 +174,21 @@ cargo install --path . --locked
# Option 3 - Install using cargo with the repo as the source
cargo install --git https://github.com/ClementTsang/bottom --locked
# You can also pass in the target-cpu=native flag for
# better CPU-specific optimizations. For example:
# You can also pass in the target-cpu=native flag to try to
# use better CPU-specific optimizations. For example:
RUSTFLAGS="-C target-cpu=native" cargo install --path . --locked
```
### Alpine
bottom is available as a [package](https://pkgs.alpinelinux.org/packages?name=bottom&branch=edge&repo=&arch=&origin=&flagged=&maintainer=) for Alpine Linux via `apk`:
```bash
apk add bottom
```
Packages for documentation ([`bottom-doc`](https://pkgs.alpinelinux.org/packages?name=bottom-doc&branch=edge&repo=&arch=&origin=&flagged=&maintainer=)) and completions for Bash ([`bottom-bash-completion`](https://pkgs.alpinelinux.org/packages?name=bottom-bash-completion&branch=edge&repo=&arch=&origin=&flagged=&maintainer=)), Fish ([`bottom-fish-completion`](https://pkgs.alpinelinux.org/packages?name=bottom-fish-completion&branch=edge&repo=&arch=&origin=&flagged=&maintainer=)), and Zsh ([`bottom-zsh-completion`](https://pkgs.alpinelinux.org/packages?name=bottom-zsh-completion&branch=edge&repo=&arch=&origin=&flagged=&maintainer=)) are also available.
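For example, a minimal sketch of pulling in the documentation and Bash completion packages listed above alongside the main package:
```bash
# Install bottom together with its docs and Bash completions (package names from the list above).
apk add bottom bottom-doc bottom-bash-completion
```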
### Arch Linux
bottom is available as an [official package](https://archlinux.org/packages/extra/x86_64/bottom/) that can be installed with `pacman`:
@ -184,31 +197,38 @@ bottom is available as an [official package](https://archlinux.org/packages/extr
sudo pacman -S bottom
```
If you want the latest changes that are not yet stable, you can also install `bottom-git` [from the AUR](https://aur.archlinux.org/packages/bottom-git).
For example, to install with `paru`:
If you want the latest changes that are not yet stable, you can also install `bottom-git` [from the AUR](https://aur.archlinux.org/packages/bottom-git):
```bash
sudo paru -S bottom-git
# Using paru
paru -S bottom-git
# Using yay
yay -S bottom-git
```
### Debian / Ubuntu
A `.deb` file is provided on each [stable release](https://github.com/ClementTsang/bottom/releases/latest) and
[nightly builds](https://github.com/ClementTsang/bottom/releases/tag/nightly) for x86, aarch64, and armv7
(note stable ARM builds are only available for 0.6.8 and later). An example of installing this way:
[nightly builds](https://github.com/ClementTsang/bottom/releases/tag/nightly) for x86, aarch64, and armv7.
Some examples of installing it this way:
```bash
# x86-64
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.10.2/bottom_0.10.2_amd64.deb
sudo dpkg -i bottom_0.10.2_amd64.deb
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.10.2/bottom_0.10.2-1_amd64.deb
sudo dpkg -i bottom_0.10.2-1_amd64.deb
# ARM64
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.10.2/bottom_0.10.2_arm64.deb
sudo dpkg -i bottom_0.10.2_arm64.deb
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.10.2/bottom_0.10.2-1_arm64.deb
sudo dpkg -i bottom_0.10.2-1_arm64.deb
# ARM
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.10.2/bottom_0.10.2_armhf.deb
sudo dpkg -i bottom_0.10.2_armhf.deb
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.10.2/bottom_0.10.2-1_armhf.deb
sudo dpkg -i bottom_0.10.2-1_armhf.deb
# musl-based
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.10.2/bottom-musl_0.10.2-1_amd64.deb
sudo dpkg -i bottom-musl_0.10.2-1_amd64.deb
```
### Exherbo Linux
@ -222,13 +242,20 @@ cave resolve -x bottom
### Fedora / CentOS / AlmaLinux / Rocky Linux
bottom is available in [COPR](https://copr.fedorainfracloud.org/coprs/atim/bottom/):
bottom is available on [COPR](https://copr.fedorainfracloud.org/coprs/atim/bottom/):
```bash
sudo dnf copr enable atim/bottom -y
sudo dnf install bottom
```
bottom is also available via [Terra](https://terra.fyralabs.com/):
```bash
sudo dnf install --repofrompath 'terra,https://repos.fyralabs.com/terra$releasever' --setopt='terra.gpgkey=https://repos.fyralabs.com/terra$releasever/key.asc' terra-release
sudo dnf install bottom
```
`.rpm` files are also generated for x86 in the [releases](https://github.com/ClementTsang/bottom/releases) page.
For example:
@ -247,10 +274,26 @@ sudo emerge --ask sys-process/bottom
### Nix
Available [in the nix-community repo](https://github.com/nix-community/home-manager/blob/master/modules/programs/bottom.nix):
Available [in Nixpkgs](https://search.nixos.org/packages?channel=unstable&show=bottom&from=0&size=1&sort=relevance&type=packages) as `bottom`:
```bash
nix-env -i bottom
nix profile install nixpkgs#bottom
```
`bottom` can also be installed and configured through the [home-manager](https://nix-community.github.io/home-manager) module:
```nix
{
programs.bottom.enable = true;
}
```
### openSUSE
Available in openSUSE Tumbleweed:
```bash
zypper in bottom
```
### Snap
@ -334,6 +377,19 @@ You can uninstall via Control Panel, Options, or `winget --uninstall bottom`.
You can also manually install bottom as a Windows program by going to the [latest release](https://github.com/ClementTsang/bottom/releases/latest)
and installing via the `.msi` file.
### Conda
You can install bottom using `conda` with [this conda-smithy repository](https://github.com/conda-forge/bottom-feedstock):
```bash
# Add the channel
conda config --add channels conda-forge
conda config --set channel_priority strict
# Install
conda install bottom
```
### Pre-built binaries
You can also use the pre-built release binaries:
@ -351,37 +407,40 @@ or by installing to your system following the procedures for installing binaries
#### Auto-completion
The release binaries are packaged with shell auto-completion files for bash, fish, zsh, and Powershell. To install them:
The release binaries in [the releases page](https://github.com/ClementTsang/bottom/releases) are packaged with
shell auto-completion files for Bash, Zsh, fish, Powershell, Elvish, Fig, and Nushell. To install them:
- For bash, move `btm.bash` to `$XDG_CONFIG_HOME/bash_completion or /etc/bash_completion.d/`.
- For Bash, move `btm.bash` to `$XDG_CONFIG_HOME/bash_completion` or `/etc/bash_completion.d/`.
- For Zsh, move `_btm` to one of your `$fpath` directories.
- For fish, move `btm.fish` to `$HOME/.config/fish/completions/`.
- For zsh, move `_btm` to one of your `$fpath` directories.
- For PowerShell, add `_btm.ps1` to your PowerShell
[profile](<https://docs.microsoft.com/en-us/previous-versions//bb613488(v=vs.85)>).
- For PowerShell, add `_btm.ps1` to your PowerShell [profile](<https://docs.microsoft.com/en-us/previous-versions//bb613488(v=vs.85)>).
- For Elvish, the completion file is `btm.elv`.
- For Fig, the completion file is `btm.ts`.
- For Nushell, source `btm.nu`.
The individual auto-completion files are also included in the stable/nightly releases as `completion.tar.gz`.
The individual auto-completion files are also included in the stable/nightly releases as `completion.tar.gz` if needed.
## Usage
You can run bottom using `btm`.
- For help on flags, use `btm -h` for a quick overview or `btm --help` for more details.
- For info on key and mouse bindings, press `?` inside bottom or refer to the [documentation](https://clementtsang.github.io/bottom/nightly/).
- For info on key and mouse bindings, press `?` inside bottom or refer to the [documentation page](https://clementtsang.github.io/bottom/nightly/).
You can find more information on usage in the [documentation](https://clementtsang.github.io/bottom/nightly/).
## Configuration
bottom accepts a number of command-line arguments to change the behaviour of the application as desired. Additionally, bottom will automatically
generate a configuration file on the first launch, which one can change as appropriate.
bottom accepts a number of command-line arguments to change the behaviour of the application as desired.
Additionally, bottom will automatically generate a configuration file on the first launch, which can be changed.
More details on configuration can be found [in the documentation](https://clementtsang.github.io/bottom/nightly/configuration/config-file/).
## Troubleshooting
If some things aren't working, give the [troubleshooting page](https://clementtsang.github.io/bottom/nightly/troubleshooting) a look.
If things still aren't working, then consider opening [a question](https://github.com/ClementTsang/bottom/discussions)
or filing a [bug report](https://github.com/ClementTsang/bottom/issues/new/choose).
If some things aren't working, give the [troubleshooting page](https://clementtsang.github.io/bottom/nightly/troubleshooting)
a look. If things still aren't working, then consider asking [a question](https://github.com/ClementTsang/bottom/discussions)
or filing a [bug report](https://github.com/ClementTsang/bottom/issues/new/choose) if you think it's a bug.
## Contribution
@ -413,7 +472,7 @@ Thanks to all contributors:
<td align="center" valign="top" width="14.28%"><a href="http://hamberg.no/erlend"><img src="https://avatars3.githubusercontent.com/u/16063?v=4?s=100" width="100px;" alt="Erlend Hamberg"/><br /><sub><b>Erlend Hamberg</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=ehamberg" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://onee3.org"><img src="https://avatars.githubusercontent.com/u/4507647?v=4?s=100" width="100px;" alt="Frederick Zhang"/><br /><sub><b>Frederick Zhang</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=Frederick888" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/pvanheus"><img src="https://avatars.githubusercontent.com/u/4154788?v=4?s=100" width="100px;" alt="pvanheus"/><br /><sub><b>pvanheus</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=pvanheus" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://zebulon.dev/"><img src="https://avatars.githubusercontent.com/u/14242997?v=4?s=100" width="100px;" alt="Zeb Piasecki"/><br /><sub><b>Zeb Piasecki</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=zebp" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://zebulon.dev/"><img src="https://avatars.githubusercontent.com/u/14242997?v=4?s=100" width="100px;" alt="Zeb Piasecki"/><br /><sub><b>Zeb Piasecki</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=vlakreeh" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/briandipalma"><img src="https://avatars.githubusercontent.com/u/1597820?v=4?s=100" width="100px;" alt="Brian Di Palma"/><br /><sub><b>Brian Di Palma</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=briandipalma" title="Documentation">📖</a></td>
</tr>
<tr>
@ -450,7 +509,7 @@ Thanks to all contributors:
<td align="center" valign="top" width="14.28%"><a href="https://github.com/spital"><img src="https://avatars.githubusercontent.com/u/11034264?v=4?s=100" width="100px;" alt="spital"/><br /><sub><b>spital</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=spital" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://bikodbg.com/"><img src="https://avatars.githubusercontent.com/u/1389811?v=4?s=100" width="100px;" alt="Michael Bikovitsky"/><br /><sub><b>Michael Bikovitsky</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=mbikovitsky" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/dvalter"><img src="https://avatars.githubusercontent.com/u/38795282?v=4?s=100" width="100px;" alt="Dmitry Valter"/><br /><sub><b>Dmitry Valter</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=dvalter" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/aragonnetje6"><img src="https://avatars.githubusercontent.com/u/69118097?v=4?s=100" width="100px;" alt="Twan Stok"/><br /><sub><b>Twan Stok</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=aragonnetje6" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/aragonnetje6"><img src="https://avatars.githubusercontent.com/u/69118097?v=4?s=100" width="100px;" alt="Grace Stok"/><br /><sub><b>Grace Stok</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=aragonnetje6" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/yshui"><img src="https://avatars.githubusercontent.com/u/366851?v=4?s=100" width="100px;" alt="Yuxuan Shui"/><br /><sub><b>Yuxuan Shui</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=yshui" title="Code">💻</a></td>
@ -466,6 +525,22 @@ Thanks to all contributors:
<td align="center" valign="top" width="14.28%"><a href="https://github.com/MichalBryxi"><img src="https://avatars.githubusercontent.com/u/847473?v=4?s=100" width="100px;" alt="Michal Bryxí"/><br /><sub><b>Michal Bryxí</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=MichalBryxi" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://mpia.de/~hviding/"><img src="https://avatars.githubusercontent.com/u/17031860?v=4?s=100" width="100px;" alt="Raphael Erik Hviding"/><br /><sub><b>Raphael Erik Hviding</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=TheSkyentist" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://cosmichorror.dev"><img src="https://avatars.githubusercontent.com/u/30302768?v=4?s=100" width="100px;" alt="CosmicHorror"/><br /><sub><b>CosmicHorror</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=CosmicHorrorDev" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.woods.am/"><img src="https://avatars.githubusercontent.com/u/7113557?v=4?s=100" width="100px;" alt="Ben Woods"/><br /><sub><b>Ben Woods</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=woodsb02" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://cgdct.moe"><img src="https://avatars.githubusercontent.com/u/20411956?v=4?s=100" width="100px;" alt="Stephen Huan"/><br /><sub><b>Stephen Huan</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=stephen-huan" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jasongwartz"><img src="https://avatars.githubusercontent.com/u/10981911?v=4?s=100" width="100px;" alt="Jason Gwartz"/><br /><sub><b>Jason Gwartz</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=jasongwartz" title="Documentation">📖</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/llc0930"><img src="https://avatars.githubusercontent.com/u/14966910?v=4?s=100" width="100px;" alt="llc0930"/><br /><sub><b>llc0930</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=llc0930" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://chronovore.dev"><img src="https://avatars.githubusercontent.com/u/614231?v=4?s=100" width="100px;" alt="Ada Ahmed"/><br /><sub><b>Ada Ahmed</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=yretenai" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Wateir"><img src="https://avatars.githubusercontent.com/u/78731687?v=4?s=100" width="100px;" alt="Wateir"/><br /><sub><b>Wateir</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=Wateir" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/al42and"><img src="https://avatars.githubusercontent.com/u/933873?v=4?s=100" width="100px;" alt="Andrey Alekseenko"/><br /><sub><b>Andrey Alekseenko</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=al42and" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://fgimian.github.io/"><img src="https://avatars.githubusercontent.com/u/1811813?v=4?s=100" width="100px;" alt="Fotis Gimian"/><br /><sub><b>Fotis Gimian</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=fgimian" title="Code">💻</a> <a href="https://github.com/ClementTsang/bottom/commits?author=fgimian" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://sigmasquadron.net"><img src="https://avatars.githubusercontent.com/u/174749595?v=4?s=100" width="100px;" alt="Fernando Rodrigues"/><br /><sub><b>Fernando Rodrigues</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=SigmaSquadron" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://mtoohey.com"><img src="https://avatars.githubusercontent.com/u/36740602?v=4?s=100" width="100px;" alt="Matthew Toohey"/><br /><sub><b>Matthew Toohey</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=mtoohey31" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://meander.site"><img src="https://avatars.githubusercontent.com/u/11584387?v=4?s=100" width="100px;" alt="Julius Enriquez"/><br /><sub><b>Julius Enriquez</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=win8linux" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/benjamb"><img src="https://avatars.githubusercontent.com/u/8291297?v=4?s=100" width="100px;" alt="Ben Brown"/><br /><sub><b>Ben Brown</b></sub></a><br /><a href="https://github.com/ClementTsang/bottom/commits?author=benjamb" title="Code">💻</a></td>
</tr>
</tbody>
</table>
@ -480,8 +555,21 @@ Thanks to all contributors:
- This project is very much inspired by [gotop](https://github.com/xxxserxxx/gotop),
[gtop](https://github.com/aksakalli/gtop), and [htop](https://github.com/htop-dev/htop/).
- This application was written with many, _many_ libraries, and built on the
work of many talented people. This application would be impossible without their
work. I used to thank them all individually but the list got too large...
- This application was written with [many](https://github.com/ClementTsang/bottom/blob/main/Cargo.toml),
[_many_ libraries](https://github.com/ClementTsang/bottom/blob/main/Cargo.lock), as well as many services and
programs, all built on top of the work of many talented people. bottom would not exist without all of this.
- And of course, another round of thanks to all the contributors and package maintainers!
- And of course, thank you again to all contributors and package maintainers!
- I also really appreciate anyone who has used bottom, and those
who go out of their way to report bugs or suggest ways to improve things. I hope
it's been a useful tool for others.
- To those who support my work financially via donations, thank you so much.
- Also thanks to JetBrains for providing access to tools that I use to develop bottom
as part of their [open source support program](https://jb.gg/OpenSourceSupport).
<a href="https://jb.gg/OpenSourceSupport">
<img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.svg" alt="JetBrains logo" width="150" />
</a>

View File

@ -1,4 +1,6 @@
#[allow(dead_code)]
//! General build script used by bottom to generate completion files and set binary version.
#[expect(dead_code)]
#[path = "src/options/args.rs"]
mod args;
@ -8,22 +10,20 @@ use std::{
};
use clap::{Command, CommandFactory};
use clap_complete::{generate_to, shells::Shell, Generator};
use clap_complete::{Generator, generate_to, shells::Shell};
use clap_complete_fig::Fig;
use clap_complete_nushell::Nushell;
use crate::args::BottomArgs;
fn create_dir(dir: &Path) -> io::Result<()> {
let res = fs::create_dir_all(dir);
match &res {
Ok(()) => {}
Err(err) => {
eprintln!("Failed to create a directory at location {dir:?}, encountered error {err:?}. Aborting...",);
}
}
res
fs::create_dir_all(dir).inspect_err(|err| {
eprintln!(
"Couldn't create a directory at {} ({:?}). Aborting.",
dir.display(),
err
)
})
}
fn generate_completions<G>(to_generate: G, cmd: &mut Command, out_dir: &Path) -> io::Result<PathBuf>
@ -38,11 +38,12 @@ fn btm_generate() -> io::Result<()> {
match env::var_os(ENV_KEY) {
Some(var) if !var.is_empty() => {
const COMPLETION_DIR: &str = "./target/tmp/bottom/completion/";
const MANPAGE_DIR: &str = "./target/tmp/bottom/manpage/";
let completion_dir =
option_env!("COMPLETION_DIR").unwrap_or("./target/tmp/bottom/completion/");
let manpage_dir = option_env!("MANPAGE_DIR").unwrap_or("./target/tmp/bottom/manpage/");
let completion_out_dir = PathBuf::from(COMPLETION_DIR);
let manpage_out_dir = PathBuf::from(MANPAGE_DIR);
let completion_out_dir = PathBuf::from(completion_dir);
let manpage_out_dir = PathBuf::from(manpage_dir);
create_dir(&completion_out_dir)?;
create_dir(&manpage_out_dir)?;

View File

@ -7,7 +7,8 @@ Documentation is currently built using Python 3.11, though it should work fine w
## Running locally
One way is to just run `serve.sh`. Alternatively, the manual steps are:
One way is to just run `serve.sh`. Alternatively, the manual steps are, assuming your current working directory
is the bottom repo:
```bash
# Change directories to the documentation.
@ -26,16 +27,17 @@ venv/bin/mkdocs serve
## Deploying
Deploying is done via [mike](https://github.com/jimporter/mike).
Deploying is done via [mike](https://github.com/jimporter/mike) in order to get versioning. Typically,
this is done through CI, but can be done manually if needed.
### Nightly
### Nightly docs
```bash
cd docs
mike deploy nightly --push
```
### Stable
### Stable docs
```bash
cd docs

View File

@ -30,7 +30,7 @@ see information on these options by running `btm -h`, or run `btm --help` to dis
| `-S, --case_sensitive` | Enables case sensitivity by default. |
| `-u, --current_usage` | Calculates process CPU usage as a percentage of current usage rather than total usage. |
| `--disable_advanced_kill` | Hides additional stopping options on Unix-like systems. |
| `-g, --group_processes` | Groups processes with the same name by default. |
| `-g, --group_processes` | Groups processes with the same name by default. No effect if `--tree` is set. |
| `--process_memory_as_value` | Defaults to showing process memory usage by value. |
| `--process_command` | Shows the full command name instead of the process name by default. |
| `-R, --regex` | Enables regex by default while searching. |
@ -79,9 +79,9 @@ see information on these options by running `btm -h`, or run `btm --help` to dis
## GPU Options
| Option | Behaviour |
| -------------- | ------------------------------------------- |
| `--enable_gpu` | Enable collecting and displaying GPU usage. |
| Option | Behaviour |
| --------------- | ----------------------------------------------------------------- |
| `--disable_gpu` | Disable collecting and displaying NVIDIA and AMD GPU information. |
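For example, a minimal sketch of launching bottom with the flag above to skip GPU data collection:
```bash
# Run bottom without collecting NVIDIA/AMD GPU information.
btm --disable_gpu
```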
## Style Options

View File

@ -44,7 +44,7 @@ each time:
| `network_use_binary_prefix` | Boolean | Displays the network widget with binary prefixes. |
| `network_use_bytes` | Boolean | Displays the network widget using bytes. |
| `network_use_log` | Boolean | Displays the network widget with a log scale. |
| `enable_gpu` | Boolean | Shows the GPU widgets. |
| `disable_gpu` | Boolean | Disable NVIDIA and AMD GPU data collection. |
| `retention` | String (human readable time, such as "10m", "1h", etc.) | How much data is stored at once in terms of time. |
| `unnormalized_cpu` | Boolean | Show process CPU% without normalizing over the number of cores. |
| `expanded` | Boolean | Expand the default widget upon starting the app. |

View File

@ -6,11 +6,11 @@ For persistent configuration, and for certain configuration options, bottom supp
If no config file argument is given, it will automatically look for a config file at these locations:
| OS | Default Config Location |
| ------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| macOS | `$HOME/Library/Application Support/bottom/bottom.toml`<br/> `~/.config/bottom/bottom.toml` <br/> `$XDG_CONFIG_HOME/bottom/bottom.toml` |
| Linux | `~/.config/bottom/bottom.toml` <br/> `$XDG_CONFIG_HOME/bottom/bottom.toml` |
| Windows | `C:\Users\<USER>\AppData\Roaming\bottom\bottom.toml` |
| OS | Default Config Location |
| ------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| macOS | `$HOME/Library/Application Support/bottom/bottom.toml`<br/> `$HOME/.config/bottom/bottom.toml` <br/> `$XDG_CONFIG_HOME/bottom/bottom.toml` |
| Linux | `$HOME/.config/bottom/bottom.toml` <br/> `$XDG_CONFIG_HOME/bottom/bottom.toml` |
| Windows | `C:\Users\<USER>\AppData\Roaming\bottom\bottom.toml` |
If the config file doesn't exist at the path, bottom will automatically try to create a new config file at the location
with default values.

View File

@ -12,7 +12,7 @@ There are a few areas where documentation changes are often needed:
- The [`README.md`](https://github.com/ClementTsang/bottom/blob/main/README.md)
- The help menu inside of the application (located [here](https://github.com/ClementTsang/bottom/blob/main/src/constants.rs))
- The [extended documentation](https://clementtsang.github.io/bottom/nightly/) (here)
- The [extended documentation](../index.md) (what you're reading right now)
- The [`CHANGELOG.md`](https://github.com/ClementTsang/bottom/blob/main/CHANGELOG.md)
## How should I add/update documentation?

View File

@ -54,6 +54,9 @@ This will automatically generate completion and manpage files in `target/tmp/bot
files, modify/delete either these files or set `BTM_GENERATE` to some other non-empty value to retrigger the build
script.
You may override the default directories used to generate both completion and manpage files by specifying the
`COMPLETION_DIR` and `MANPAGE_DIR` environment variables respectively.
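As a rough sketch (assuming the generation step is triggered via the `BTM_GENERATE` variable mentioned above, and using placeholder paths), overriding both directories might look like:
```bash
# Regenerate completion and manpage files into custom output directories.
# The paths below are only placeholders; adjust them as needed.
BTM_GENERATE=true \
COMPLETION_DIR="$PWD/target/tmp/my-completion" \
MANPAGE_DIR="$PWD/target/tmp/my-manpage" \
cargo build
```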
For more information, you may want to look at either the [`build.rs`](https://github.com/ClementTsang/bottom/blob/main/build.rs)
file or the [binary build CI workflow](https://github.com/ClementTsang/bottom/blob/main/.github/workflows/build_releases.yml).

View File

@ -33,7 +33,7 @@ The command to run bottom is `btm`.
You can refer to the [usage](usage/general-usage.md) pages for more details on using bottom (e.g. keybinds, some features, a general overview of what each widget does).
To configure bottom (e.g. how it behaves, how it looks, etc.) refer to the [command-line options page](configuration/command-line-options.md) for temporary settings, or [the config file page](configuration/config-file) for more permanent settings.
To configure bottom (e.g. how it behaves, how it looks, etc.) refer to the [command-line options page](configuration/command-line-options.md) for temporary settings, or [the config file page](configuration/config-file/index.md) for more permanent settings.
## Contribution

View File

@ -23,18 +23,24 @@ Another (better) alternative is to install a font that supports braille fonts, a
For example, installing something like [UBraille](https://yudit.org/download/fonts/UBraille/) or [Iosevka](https://github.com/be5invis/Iosevka)
and ensuring your terminal uses it should work.
### Braille font issues on Linux/macOS/Unix-like
### Linux/macOS/Unix
Generally, the problem comes down to you either not having a font that supports the braille markers, or your terminal
emulator is not using the correct font for the braille markers.
If you're on a Unix-like system, generally, the problem comes down to you either not having a font that supports the
braille markers, or your terminal emulator is not using the correct font for the braille markers.
See [here](https://github.com/cjbassi/gotop/issues/18) for possible fixes if you're having font issues on Linux, which
may also be helpful for macOS or other Unix-like systems.
Some possible solutions include:
- Uninstalling `gnu-free-fonts` if installed, as that is known to cause problems with braille markers
- Installing a font like `ttf-symbola` or `ttf-ubraille` for your terminal emulator to try and automatically fall back to
- Configuring your terminal emulator to use specific fonts for the `U+2800` to `U+28FF` range.
- For example, for kitty, add `symbol_map U+2800-U+28FF Symbola` to its config file.
See [this issue](https://github.com/cjbassi/gotop/issues/18) for more possible fixes.
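As a concrete sketch of the kitty suggestion above (assuming Symbola is installed and kitty reads its config from the default `~/.config/kitty/kitty.conf`):
```bash
# Map the braille range to Symbola; swap in whichever braille-capable font you installed.
echo "symbol_map U+2800-U+28FF Symbola" >> ~/.config/kitty/kitty.conf
```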
If you're still having issues, feel free to open a [discussion](https://github.com/ClementTsang/bottom/discussions/new/)
question about it.
### Installing fonts for Windows Command Prompt/PowerShell
### Windows/Powershell
**Note: I would advise backing up your registry beforehand if you aren't sure what you are doing!**
@ -50,16 +56,16 @@ Let's say you're installing [Iosevka](https://github.com/be5invis/Iosevka). The
4. Here, add a new `String value`, and set the `Name` to a bunch of 0's (e.g. `000` - make sure the name isn't already used), then set the `Data` to the font name (e.g. `Iosevka`).
<figure>
<img src="../assets/screenshots/troubleshooting/regedit_fonts.webp" alt="Regedit menu showing how to add a new font for Command Prompt/PowerShell"/>
<figcaption><sub>The last entry is the new entry for Iosevka</sub></figcaption>
</figure>
<figure>
<img src="../assets/screenshots/troubleshooting/regedit_fonts.webp" alt="Regedit menu showing how to add a new font for Command Prompt/PowerShell"/>
<figcaption><sub>The last entry is the new entry for Iosevka</sub></figcaption>
</figure>
5. Then, open the Command Prompt/PowerShell, and right-click on the top bar, and open "Properties":
<figure>
<img src="../assets/screenshots/troubleshooting/cmd_prompt_props.webp" alt="Opening the properties menu in Command Prompt/PowerShell"/>
</figure>
<figure>
<img src="../assets/screenshots/troubleshooting/cmd_prompt_props.webp" alt="Opening the properties menu in Command Prompt/PowerShell"/>
</figure>
6. From here, go to "Font", and set the font to your new font (so in this example, Iosevka):

View File

@ -0,0 +1,14 @@
# Auto-Complete
The release binaries in [the releases page](https://github.com/ClementTsang/bottom/releases) are packaged with
shell auto-completion files for Bash, Zsh, fish, Powershell, Elvish, Fig, and Nushell. To install them:
- For Bash, move `btm.bash` to `$XDG_CONFIG_HOME/bash_completion` or `/etc/bash_completion.d/`.
- For Zsh, move `_btm` to one of your `$fpath` directories.
- For fish, move `btm.fish` to `$HOME/.config/fish/completions/`.
- For PowerShell, add `_btm.ps1` to your PowerShell [profile](<https://docs.microsoft.com/en-us/previous-versions//bb613488(v=vs.85)>).
- For Elvish, the completion file is `btm.elv`.
- For Fig, the completion file is `btm.ts`.
- For Nushell, source `btm.nu`.
The individual auto-completion files are also included in the stable/nightly releases as `completion.tar.gz` if needed.
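For instance, a minimal sketch for Bash and fish, assuming the files were extracted from `completion.tar.gz` into the current directory:
```bash
# Bash: place the completion script where bash-completion (or your distro) will pick it up.
mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/bash_completion"
cp btm.bash "${XDG_CONFIG_HOME:-$HOME/.config}/bash_completion/"

# fish: drop the completion file into the user completions directory.
mkdir -p "$HOME/.config/fish/completions"
cp btm.fish "$HOME/.config/fish/completions/"
```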

View File

@ -13,7 +13,7 @@ If the total RAM or swap available is 0, then it is automatically hidden from th
One can also adjust the displayed time range through either the keyboard or mouse, with a range of 30s to 600s.
This widget can also be configured to display Nvidia GPU memory usage (`--enable_gpu` on Linux/Windows) or cache memory usage (`--enable_cache_memory`).
This widget can also be configured to display Nvidia and AMD GPU memory usage (`--disable_gpu` on Linux/Windows to disable) or cache memory usage (`--enable_cache_memory`).
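For example, a minimal sketch of enabling the cache memory display mentioned above:
```bash
# Show cache and buffer memory in the memory widget.
btm --enable_cache_memory
```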
## Key bindings

View File

@ -36,7 +36,7 @@ By default, the main process table displays the following information for each p
[here](https://docs.rs/sysinfo/latest/sysinfo/struct.Process.html#method.disk_usage)
for more details.
With the feature flag (`--enable_gpu` on Linux/Windows) and gpu process columns enabled in the configuration:
With the feature flag (`--disable_gpu` on Linux/Windows to disable) and gpu process columns enabled in the configuration:
- GPU memory use percentage
- GPU core utilization percentage
@ -64,7 +64,14 @@ is added together when displayed.
<img src="../../../assets/screenshots/process/process_grouped.webp" alt="A picture of grouped mode in a process widget."/>
</figure>
Note that the process state and user columns are disabled in this mode.
!!! info
Note that the process state and user columns are disabled in this mode.
!!! info
Note that if tree mode is also active, processes cannot be grouped together due to the behaviour of the two modes
somewhat clashing. The same applies if grouping is enabled by default via options like `group_processes`.
### Process termination
@ -94,7 +101,10 @@ Pressing ++t++ or ++f5++ in the table toggles tree mode in the process widget, d
A process in tree mode can also be "collapsed", hiding its children and any descendants, using either the ++minus++ or ++plus++ keys, or double-clicking on an entry.
Lastly, note that in tree mode, processes cannot be grouped together due to the behaviour of the two modes somewhat clashing.
!!! info
Note that if tree mode is active, processes cannot be grouped together due to the behaviour of the two modes
somewhat clashing. The same applies if grouping is enabled by default via options like `group_processes`.
### Full command

View File

@ -10,7 +10,7 @@ The temperature widget provides a table of temperature sensors and their current
The temperature widget provides the sensor name as well as its current temperature.
This widget can also be configured to display Nvidia GPU temperatures (`--enable_gpu` on Linux/Windows).
This widget can also be configured to display Nvidia and AMD GPU temperatures (`--disable_gpu` on Linux/Windows to disable).
## Key bindings

View File

@ -161,15 +161,17 @@ nav:
- "Disk Widget": usage/widgets/disk.md
- "Temperature Widget": usage/widgets/temperature.md
- "Battery Widget": usage/widgets/battery.md
- "Auto-Complete": usage/autocomplete.md
- "Configuration":
- "Command-line Options": configuration/command-line-options.md
- "Config File":
- configuration/config-file/index.md
- "Flags": configuration/config-file/flags.md
- "Styling": configuration/config-file/styling.md
- "Layout": configuration/config-file/layout.md
- "CPU Widget": configuration/config-file/cpu.md
- "Data Filtering": configuration/config-file/data-filtering.md
- "Processes": configuration/config-file/processes.md
- "Flags": configuration/config-file/flags.md
- "Layout": configuration/config-file/layout.md
- "Processes Widget": configuration/config-file/processes.md
- "Styling": configuration/config-file/styling.md
- "Contribution":
- "Issues, Pull Requests, and Discussions": contribution/issues-and-pull-requests.md
- "Documentation": contribution/documentation.md

View File

@ -1,5 +1,6 @@
mkdocs == 1.6.0
mkdocs-material == 9.5.31
mkdocs == 1.6.1
mkdocs-material == 9.6.9
mdx_truly_sane_lists == 1.3
mike == 2.1.2
mkdocs-git-revision-date-localized-plugin == 1.2.4
mike == 2.1.3
mkdocs-git-revision-date-localized-plugin == 1.4.5

View File

@ -5,6 +5,7 @@ fn_params_layout = "Compressed"
use_field_init_shorthand = true
tab_spaces = 4
max_width = 100
style_edition = "2024"
# Unstable options, disabled by default.
# imports_granularity = "Crate"

View File

@ -1,32 +1,42 @@
# This is a default config file for bottom. All of the settings are commented
# This is a default config file for bottom. All of the settings are commented
# out by default; if you wish to change them uncomment and modify as you see
# fit.
# This group of options represents a command-line option. Flags explicitly
# This group of options represents a command-line option. Flags explicitly
# added when running (ie: btm -a) will override this config file if an option
# is also set here.
[flags]
# Whether to hide the average cpu entry.
#hide_avg_cpu = false
# Whether to use dot markers rather than braille.
#dot_marker = false
# The update rate of the application.
#rate = "1s"
# Whether to put the CPU legend to the left.
#cpu_left_legend = false
# Whether to set CPU% on a process to be based on the total CPU or just current usage.
#current_usage = false
# Whether to set CPU% on a process to be based on the total CPU or per-core CPU% (not divided by the number of cpus).
#unnormalized_cpu = false
# Whether to group processes with the same name together by default.
# Whether to group processes with the same name together by default. Doesn't do anything
# if tree is set to true or --tree is set.
#group_processes = false
# Whether to make process searching case sensitive by default.
#case_sensitive = false
# Whether to make process searching look for matching the entire word by default.
#whole_word = false
# Whether to make process searching use regex by default.
#regex = false
# The temperature unit. One of the following, defaults to "c" for Celsius:
#temperature_type = "c"
##temperature_type = "k"
@ -34,101 +44,176 @@
##temperature_type = "kelvin"
##temperature_type = "fahrenheit"
##temperature_type = "celsius"
# The default time interval (a human-readable time such as "60s", or a number in milliseconds).
#default_time_value = "60s"
# The time delta on each zoom in/out action (in milliseconds).
#time_delta = 15000
# Hides the time scale.
#hide_time = false
# Override layout default widget
#default_widget_type = "proc"
#default_widget_count = 1
# Expand selected widget upon starting the app
#expanded = true
# Use basic mode
#basic = false
# Use the old network legend style
#use_old_network_legend = false
# Remove space in tables
#hide_table_gap = false
# Show the battery widgets
#battery = false
# Disable mouse clicks
#disable_click = false
# Built-in themes. Valid values are "default", "default-light", "gruvbox", "gruvbox-light", "nord", "nord-light"
#color = "default"
# Show memory values in the processes widget as values by default
#process_memory_as_value = false
# Show tree mode by default in the processes widget.
#tree = false
# Shows an indicator in table widgets tracking where in the list you are.
#show_table_scroll_position = false
# Show processes as their commands by default in the process widget.
#process_command = false
# Displays the network widget with binary prefixes.
#network_use_binary_prefix = false
# Displays the network widget using bytes.
#network_use_bytes = false
# Displays the network widget with a log scale.
#network_use_log = false
# Hides advanced options to stop a process on Unix-like systems.
#disable_advanced_kill = false
# Shows GPU(s) memory
#enable_gpu = false
# Hide GPU(s) information
#disable_gpu = false
# Shows cache and buffer memory
#enable_cache_memory = false
# How much data is stored at once in terms of time.
#retention = "10m"
# Where to place the legend for the memory widget. One of "none", "top-left", "top", "top-right", "left", "right", "bottom-left", "bottom", "bottom-right".
#memory_legend = "top-right"
# Where to place the legend for the network widget. One of "none", "top-left", "top", "top-right", "left", "right", "bottom-left", "bottom", "bottom-right".
#network_legend = "top-right"
# Processes widget configuration
#[processes]
# The columns shown by the process widget. The following columns are supported:
# The columns shown by the process widget. The following columns are supported (the GPU columns are only available if the GPU feature is enabled when built):
# PID, Name, CPU%, Mem%, R/s, W/s, T.Read, T.Write, User, State, Time, GMem%, GPU%
#columns = ["PID", "Name", "CPU%", "Mem%", "R/s", "W/s", "T.Read", "T.Write", "User", "State", "GMem%", "GPU%"]
# CPU widget configuration
#[cpu]
# One of "all" (default), "average"/"avg"
# default = "average"
#default = "average"
# Disk widget configuration
#[disk]
#[name_filter]
# The columns shown by the disk widget. The following columns are supported:
# Disk, Mount, Used, Free, Total, Used%, Free%, R/s, W/s
#columns = ["Disk", "Mount", "Used", "Free", "Total", "Used%", "R/s", "W/s"]
# By default, there are no disk name filters enabled. These can be turned on to filter out specific data entries if you
# don't want to see them. An example use case is provided below.
#[disk.name_filter]
# Whether to ignore any matches. Defaults to true.
#is_list_ignored = true
# A list of filters to try and match.
#list = ["/dev/sda\\d+", "/dev/nvme0n1p2"]
# Whether to use regex. Defaults to false.
#regex = true
# Whether to be case-sensitive. Defaults to false.
#case_sensitive = false
# Whether to be require matching the whole word. Defaults to false.
#whole_word = false
#[mount_filter]
# By default, there are no mount name filters enabled. An example use case is provided below.
#[disk.mount_filter]
# Whether to ignore any matches. Defaults to true.
#is_list_ignored = true
# A list of filters to try and match.
#list = ["/mnt/.*", "/boot"]
# Whether to use regex. Defaults to false.
#regex = true
# Whether to be case-sensitive. Defaults to false.
#case_sensitive = false
# Whether to be require matching the whole word. Defaults to false.
#whole_word = false
# Temperature widget configuration
#[temperature]
#[sensor_filter]
# By default, there are no temperature sensor filters enabled. An example use case is provided below.
#[temperature.sensor_filter]
# Whether to ignore any matches. Defaults to true.
#is_list_ignored = true
# A list of filters to try and match.
#list = ["cpu", "wifi"]
# Whether to use regex. Defaults to false.
#regex = false
# Whether to be case-sensitive. Defaults to false.
#case_sensitive = false
# Whether to be require matching the whole word. Defaults to false.
#whole_word = false
# Network widget configuration
#[network]
#[interface_filter]
# By default, there are no network interface filters enabled. An example use case is provided below.
#[network.interface_filter]
# Whether to ignore any matches. Defaults to true.
#is_list_ignored = true
# A list of filters to try and match.
#list = ["virbr0.*"]
# Whether to use regex. Defaults to false.
#regex = true
# Whether to be case-sensitive. Defaults to false.
#case_sensitive = false
# Whether to be require matching the whole word. Defaults to false.
#whole_word = false
# These are all the components that support custom theming. Note that colour support
# will depend on terminal support.
#[styles] # Uncomment if you want to use custom styling
# Built-in themes. Valid values are:
# - "default"
# - "default-light"

View File

@ -9,6 +9,8 @@ behind a feature flag to avoid building unnecessary code for release builds, and
cargo run --features="generate_schema" -- --generate_schema > schema/nightly/bottom.json
```
Alternatively, run the script in `scripts/schema/generate.sh`, which does this for you.
## Publication
To publish these schemas, cut a new version by copying `nightly` to a new folder with a version number matching bottom's

View File

@ -1,7 +1,7 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://github.com/ClementTsang/bottom/blob/main/schema/nightly/bottom.json",
"title": "Schema for bottom's configs (nightly)",
"title": "Schema for bottom's config file (nightly)",
"description": "https://clementtsang.github.io/bottom/nightly/configuration/config-file",
"type": "object",
"properties": {
@ -183,10 +183,35 @@
}
}
},
"DiskColumn": {
"type": "string",
"enum": [
"Disk",
"Free",
"Free%",
"Mount",
"R/s",
"Read",
"Rps",
"Total",
"Used",
"Used%",
"W/s",
"Wps",
"Write"
]
},
"DiskConfig": {
"description": "Disk configuration.",
"type": "object",
"properties": {
"columns": {
"description": "A list of disk widget columns.",
"type": "array",
"items": {
"$ref": "#/definitions/DiskColumn"
}
},
"mount_filter": {
"description": "A filter over the mount names.",
"anyOf": [
@ -318,6 +343,12 @@
"null"
]
},
"disable_gpu": {
"type": [
"boolean",
"null"
]
},
"dot_marker": {
"type": [
"boolean",
@ -330,12 +361,6 @@
"null"
]
},
"enable_gpu": {
"type": [
"boolean",
"null"
]
},
"expanded": {
"type": [
"boolean",
@ -677,29 +702,29 @@
"description": "A column in the process widget.",
"type": "string",
"enum": [
"PID",
"Count",
"Name",
"Command",
"CPU%",
"Command",
"Count",
"GMem",
"GMem%",
"GPU%",
"Mem",
"Mem%",
"Name",
"PID",
"R/s",
"Read",
"Rps",
"W/s",
"Write",
"Wps",
"State",
"T.Read",
"TWrite",
"T.Write",
"TRead",
"State",
"User",
"TWrite",
"Time",
"GMem",
"GMem%",
"GPU%"
"User",
"W/s",
"Wps",
"Write"
]
},
"ProcessesConfig": {
@ -930,6 +955,15 @@
}
]
},
"WidgetBorderType": {
"type": "string",
"enum": [
"Default",
"Rounded",
"Double",
"Thick"
]
},
"WidgetStyle": {
"description": "General styling for generic widgets.",
"type": "object",
@ -989,6 +1023,17 @@
}
]
},
"widget_border_type": {
"description": "Widget borders type.",
"anyOf": [
{
"$ref": "#/definitions/WidgetBorderType"
},
{
"type": "null"
}
]
},
"widget_title": {
"description": "Text styling for a widget's title.",
"anyOf": [

View File

@ -1,8 +1,8 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://github.com/ClementTsang/bottom/blob/main/schema/nightly/bottom.json",
"title": "Schema for bottom's configs (nightly)",
"description": "https://clementtsang.github.io/bottom/nightly/configuration/config-file",
"$id": "https://github.com/ClementTsang/bottom/blob/main/schema/v0.10/bottom.json",
"title": "Schema for bottom's configs (v0.10)",
"description": "https://clementtsang.github.io/bottom/0.10.0/configuration/config-file/",
"type": "object",
"properties": {
"cpu": {

View File

@ -1,8 +1,8 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://github.com/ClementTsang/bottom/blob/main/schema/v1.0/bottom.json",
"$id": "https://github.com/ClementTsang/bottom/blob/main/schema/v0.9/bottom.json",
"$comment": "https://clementtsang.github.io/bottom/0.9.6/configuration/config-file/default-config/",
"title": "Schema for bottom's configs (v1.0)",
"title": "Schema for bottom's configs (v0.9)",
"type": "object",
"definitions": {
"row": {

View File

@ -20,8 +20,6 @@ from urllib.request import Request, urlopen, urlretrieve
# Form of each task is (TASK_ALIAS, FILE_NAME).
TASKS: List[Tuple[str, str]] = [
("freebsd_13_3_build", "bottom_x86_64-unknown-freebsd-13-3.tar.gz"),
("freebsd_14_0_build", "bottom_x86_64-unknown-freebsd-14-0.tar.gz"),
("linux_2_17_build", "bottom_x86_64-unknown-linux-gnu-2-17.tar.gz"),
]
URL = "https://api.cirrus-ci.com/graphql"

View File

@ -1,78 +0,0 @@
import hashlib
import sys
from string import Template
args = sys.argv
version = args[1]
template_file_path = args[2]
generated_file_path = args[3]
# SHA512, SHA256, or SHA1
hash_type = args[4]
# Deployment files
deployment_file_path_1 = args[5]
deployment_file_path_2 = args[6] if len(args) > 6 else None
deployment_file_path_3 = args[7] if len(args) > 7 else None
print("Generating package for file: %s" % deployment_file_path_1)
if deployment_file_path_2 is not None:
print("and for file: %s" % deployment_file_path_2)
if deployment_file_path_3 is not None:
print("and for file: %s" % deployment_file_path_3)
print(" VERSION: %s" % version)
print(" TEMPLATE PATH: %s" % template_file_path)
print(" SAVING AT: %s" % generated_file_path)
print(" USING HASH TYPE: %s" % hash_type)
def get_hash(deployment_file):
if str.lower(hash_type) == "sha512":
deployment_hash = hashlib.sha512(deployment_file.read()).hexdigest()
elif str.lower(hash_type) == "sha256":
deployment_hash = hashlib.sha256(deployment_file.read()).hexdigest()
elif str.lower(hash_type) == "sha1":
deployment_hash = hashlib.sha1(deployment_file.read()).hexdigest()
else:
print(
'Unsupported hash format "%s". Please use SHA512, SHA256, or SHA1.',
hash_type,
)
exit(1)
print("Generated hash: %s" % str(deployment_hash))
return deployment_hash
with open(deployment_file_path_1, "rb") as deployment_file_1:
deployment_hash_1 = get_hash(deployment_file_1)
deployment_hash_2 = None
if deployment_file_path_2 is not None:
with open(deployment_file_path_2, "rb") as deployment_file_2:
deployment_hash_2 = get_hash(deployment_file_2)
deployment_hash_3 = None
if deployment_file_path_3 is not None:
with open(deployment_file_path_3, "rb") as deployment_file_3:
deployment_hash_3 = get_hash(deployment_file_3)
with open(template_file_path, "r") as template_file:
template = Template(template_file.read())
substitutes = dict()
substitutes["version"] = version
substitutes["hash1"] = deployment_hash_1
if deployment_hash_2 is not None:
substitutes["hash2"] = deployment_hash_2
if deployment_hash_3 is not None:
substitutes["hash3"] = deployment_hash_3
substitute = template.safe_substitute(substitutes)
print("\n================== Generated package file ==================\n")
print(substitute)
print("\n============================================================\n")
with open(generated_file_path, "w") as generated_file:
generated_file.write(substitute)

8
scripts/schema/generate.sh Executable file
View File

@ -0,0 +1,8 @@
#!/bin/bash
set -e
cd "$(dirname "$0")";
cd ../..
cargo run --bin schema --features="generate_schema" -- $1 > schema/nightly/bottom.json

View File

@ -1 +1 @@
jsonschema-rs == 0.18.0
jsonschema-rs == 0.26.1

View File

@ -40,7 +40,7 @@ def main():
with open(file, "rb") as f, open(schema) as s:
try:
validator = jsonschema_rs.JSONSchema.from_str(s.read())
validator = jsonschema_rs.validator_for(s.read())
except:
print("Couldn't create validator.")
exit()
@ -51,7 +51,7 @@ def main():
read_file = re.sub(
r"^#(\s\s+)([a-zA-Z\[])", r"\2", read_file, flags=re.MULTILINE
)
print(f"uncommented file: \n{read_file}")
print(f"uncommented file: \n{read_file}\n=====\n")
toml_str = tomllib.loads(read_file)
else:

View File

@ -1,9 +1,7 @@
pub mod data_farmer;
pub mod data;
pub mod filter;
pub mod frozen_state;
pub mod layout_manager;
mod process_killer;
pub mod query;
pub mod states;
use std::{
@ -13,25 +11,22 @@ use std::{
use anyhow::bail;
use concat_string::concat_string;
use data_farmer::*;
use data::*;
use filter::*;
use frozen_state::FrozenState;
use hashbrown::HashMap;
use layout_manager::*;
pub use states::*;
use unicode_segmentation::{GraphemeCursor, UnicodeSegmentation};
use crate::{
canvas::components::time_chart::LegendPosition,
constants, convert_mem_data_points, convert_swap_data_points,
data_collection::{processes::Pid, temperature},
data_conversion::ConvertedData,
get_network_points,
canvas::components::time_graph::LegendPosition,
collection::processes::Pid,
constants,
utils::data_units::DataUnit,
widgets::{ProcWidgetColumn, ProcWidgetMode},
};
#[derive(Debug, Clone, Eq, PartialEq, Default)]
#[derive(Debug, Clone, Eq, PartialEq, Default, Copy)]
pub enum AxisScaling {
#[default]
Log,
@ -43,7 +38,7 @@ pub enum AxisScaling {
#[derive(Debug, Default, Eq, PartialEq)]
pub struct AppConfigFields {
pub update_rate: u64,
pub temperature_type: temperature::TemperatureType,
pub temperature_type: TemperatureType,
pub use_dot: bool,
pub cpu_left_legend: bool,
pub show_average_cpu: bool, // TODO: Unify this in CPU options
@ -106,18 +101,14 @@ pub struct App {
second_char: Option<char>,
pub dd_err: Option<String>, // FIXME: The way we do deletes is really gross.
to_delete_process_list: Option<(String, Vec<Pid>)>,
pub frozen_state: FrozenState,
pub data_store: DataStore,
last_key_press: Instant,
pub converted_data: ConvertedData,
pub data_collection: DataCollection,
pub delete_dialog_state: AppDeleteDialogState,
pub help_dialog_state: AppHelpDialogState,
pub is_expanded: bool,
pub is_force_redraw: bool,
pub is_determining_widget_boundary: bool,
pub basic_mode_use_percent: bool,
#[cfg(target_family = "unix")]
pub user_table: crate::data_collection::processes::UserTable,
pub states: AppWidgetStates,
pub app_config_fields: AppConfigFields,
pub widget_map: HashMap<u64, BottomWidget>,
@ -138,18 +129,14 @@ impl App {
second_char: None,
dd_err: None,
to_delete_process_list: None,
frozen_state: FrozenState::default(),
data_store: DataStore::default(),
last_key_press: Instant::now(),
converted_data: ConvertedData::default(),
data_collection: DataCollection::default(),
delete_dialog_state: AppDeleteDialogState::default(),
help_dialog_state: AppHelpDialogState::default(),
is_expanded,
is_force_redraw: false,
is_determining_widget_boundary: false,
basic_mode_use_percent: false,
#[cfg(target_family = "unix")]
user_table: crate::data_collection::processes::UserTable::default(),
states,
app_config_fields,
widget_map,
@ -161,82 +148,33 @@ impl App {
/// Update the data in the [`App`].
pub fn update_data(&mut self) {
let data_source = match &self.frozen_state {
FrozenState::NotFrozen => &self.data_collection,
FrozenState::Frozen(data) => data,
};
let data_source = self.data_store.get_data();
// FIXME: (points_rework_v1) maybe separate PR but would it make more sense to store references of data?
// Would it also make more sense to move the "data set" step to the draw step, and make it only set if force
// update is set here?
for proc in self.states.proc_state.widget_states.values_mut() {
if proc.force_update_data {
proc.set_table_data(data_source);
proc.force_update_data = false;
}
}
// FIXME: Make this CPU force update less terrible.
if self.states.cpu_state.force_update.is_some() {
self.converted_data.convert_cpu_data(data_source);
self.converted_data.load_avg_data = data_source.load_avg_harvest;
self.states.cpu_state.force_update = None;
}
// FIXME: This is a bit of a temp hack to move data over.
{
let data = &self.converted_data.cpu_data;
for cpu in self.states.cpu_state.widget_states.values_mut() {
cpu.update_table(data);
}
}
{
let data = &self.converted_data.temp_data;
for temp in self.states.temp_state.widget_states.values_mut() {
if temp.force_update_data {
temp.set_table_data(data);
temp.force_update_data = false;
}
}
}
{
let data = &self.converted_data.disk_data;
for disk in self.states.disk_state.widget_states.values_mut() {
if disk.force_update_data {
disk.set_table_data(data);
disk.force_update_data = false;
}
for temp in self.states.temp_state.widget_states.values_mut() {
if temp.force_update_data {
temp.set_table_data(&data_source.temp_data);
}
}
// TODO: [OPT] Prefer reassignment over new vectors?
if self.states.mem_state.force_update.is_some() {
self.converted_data.mem_data = convert_mem_data_points(data_source);
#[cfg(not(target_os = "windows"))]
{
self.converted_data.cache_data = crate::convert_cache_data_points(data_source);
for cpu in self.states.cpu_state.widget_states.values_mut() {
if cpu.force_update_data {
cpu.set_legend_data(&data_source.cpu_harvest);
}
self.converted_data.swap_data = convert_swap_data_points(data_source);
#[cfg(feature = "zfs")]
{
self.converted_data.arc_data = crate::convert_arc_data_points(data_source);
}
#[cfg(feature = "gpu")]
{
self.converted_data.gpu_data = crate::convert_gpu_data(data_source);
}
self.states.mem_state.force_update = None;
}
if self.states.net_state.force_update.is_some() {
let (rx, tx) = get_network_points(
data_source,
&self.app_config_fields.network_scale_type,
&self.app_config_fields.network_unit_type,
self.app_config_fields.network_use_binary_prefix,
);
self.converted_data.network_data_rx = rx;
self.converted_data.network_data_tx = tx;
self.states.net_state.force_update = None;
for disk in self.states.disk_state.widget_states.values_mut() {
if disk.force_update_data {
disk.set_table_data(data_source);
}
}
}
@ -261,16 +199,12 @@ impl App {
self.to_delete_process_list = None;
self.dd_err = None;
// Unfreeze.
self.frozen_state.thaw();
self.data_store.reset();
// Reset zoom
self.reset_cpu_zoom();
self.reset_mem_zoom();
self.reset_net_zoom();
// Reset data
self.data_collection.reset();
}
pub fn should_get_widget_bounds(&self) -> bool {
@ -671,14 +605,6 @@ impl App {
}
}
pub fn get_process_filter(&self, widget_id: u64) -> &Option<query::Query> {
if let Some(process_widget_state) = self.states.proc_state.widget_states.get(&widget_id) {
&process_widget_state.proc_search.search_state.query
} else {
&None
}
}
#[cfg(target_family = "unix")]
pub fn on_number(&mut self, number_char: char) {
if self.delete_dialog_state.is_showing_dd {
@ -776,7 +702,8 @@ impl App {
}
}
BottomWidgetType::Battery => {
if self.converted_data.battery_data.len() > 1 {
#[cfg(feature = "battery")]
if self.data_store.get_data().battery_harvest.len() > 1 {
if let Some(battery_widget_state) = self
.states
.battery_state
@ -837,17 +764,20 @@ impl App {
}
}
BottomWidgetType::Battery => {
if self.converted_data.battery_data.len() > 1 {
let battery_count = self.converted_data.battery_data.len();
if let Some(battery_widget_state) = self
.states
.battery_state
.get_mut_widget_state(self.current_widget.widget_id)
{
if battery_widget_state.currently_selected_battery_index
< battery_count - 1
#[cfg(feature = "battery")]
{
let battery_count = self.data_store.get_data().battery_harvest.len();
if battery_count > 1 {
if let Some(battery_widget_state) = self
.states
.battery_state
.get_mut_widget_state(self.current_widget.widget_id)
{
battery_widget_state.currently_selected_battery_index += 1;
if battery_widget_state.currently_selected_battery_index
< battery_count - 1
{
battery_widget_state.currently_selected_battery_index += 1;
}
}
}
}
@ -1286,9 +1216,7 @@ impl App {
'G' => self.skip_to_last(),
'k' => self.on_up_key(),
'j' => self.on_down_key(),
'f' => {
self.frozen_state.toggle(&self.data_collection); // TODO: Thawing should force a full data refresh and redraw immediately.
}
'f' => self.data_store.toggle_frozen(),
'c' => {
if let BottomWidgetType::Proc = self.current_widget.widget_type {
if let Some(proc_widget_state) = self
@ -1991,7 +1919,7 @@ impl App {
.proc_state
.get_mut_widget_state(self.current_widget.widget_id)
{
proc_widget_state.table.to_first();
proc_widget_state.table.scroll_to_first();
}
}
BottomWidgetType::ProcSort => {
@ -2000,7 +1928,7 @@ impl App {
.proc_state
.get_mut_widget_state(self.current_widget.widget_id - 2)
{
proc_widget_state.sort_table.to_first();
proc_widget_state.sort_table.scroll_to_first();
}
}
BottomWidgetType::Temp => {
@ -2009,7 +1937,7 @@ impl App {
.temp_state
.get_mut_widget_state(self.current_widget.widget_id)
{
temp_widget_state.table.to_first();
temp_widget_state.table.scroll_to_first();
}
}
BottomWidgetType::Disk => {
@ -2018,7 +1946,7 @@ impl App {
.disk_state
.get_mut_widget_state(self.current_widget.widget_id)
{
disk_widget_state.table.to_first();
disk_widget_state.table.scroll_to_first();
}
}
BottomWidgetType::CpuLegend => {
@ -2027,7 +1955,7 @@ impl App {
.cpu_state
.get_mut_widget_state(self.current_widget.widget_id - 1)
{
cpu_widget_state.table.to_first();
cpu_widget_state.table.scroll_to_first();
}
}
@ -2050,7 +1978,7 @@ impl App {
.proc_state
.get_mut_widget_state(self.current_widget.widget_id)
{
proc_widget_state.table.to_last();
proc_widget_state.table.scroll_to_last();
}
}
BottomWidgetType::ProcSort => {
@ -2059,7 +1987,7 @@ impl App {
.proc_state
.get_mut_widget_state(self.current_widget.widget_id - 2)
{
proc_widget_state.sort_table.to_last();
proc_widget_state.sort_table.scroll_to_last();
}
}
BottomWidgetType::Temp => {
@ -2068,7 +1996,7 @@ impl App {
.temp_state
.get_mut_widget_state(self.current_widget.widget_id)
{
temp_widget_state.table.to_last();
temp_widget_state.table.scroll_to_last();
}
}
BottomWidgetType::Disk => {
@ -2077,8 +2005,8 @@ impl App {
.disk_state
.get_mut_widget_state(self.current_widget.widget_id)
{
if !self.converted_data.disk_data.is_empty() {
disk_widget_state.table.to_last();
if !self.data_store.get_data().disk_harvest.is_empty() {
disk_widget_state.table.scroll_to_last();
}
}
}
@ -2088,7 +2016,7 @@ impl App {
.cpu_state
.get_mut_widget_state(self.current_widget.widget_id - 1)
{
cpu_widget_state.table.to_last();
cpu_widget_state.table.scroll_to_last();
}
}
_ => {}
@ -2284,7 +2212,6 @@ impl App {
if new_time <= self.app_config_fields.retention_ms {
cpu_widget_state.current_display_time = new_time;
self.states.cpu_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
cpu_widget_state.autohide_timer = Some(Instant::now());
}
@ -2292,7 +2219,6 @@ impl App {
!= self.app_config_fields.retention_ms
{
cpu_widget_state.current_display_time = self.app_config_fields.retention_ms;
self.states.cpu_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
cpu_widget_state.autohide_timer = Some(Instant::now());
}
@ -2312,7 +2238,6 @@ impl App {
if new_time <= self.app_config_fields.retention_ms {
mem_widget_state.current_display_time = new_time;
self.states.mem_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
mem_widget_state.autohide_timer = Some(Instant::now());
}
@ -2320,7 +2245,6 @@ impl App {
!= self.app_config_fields.retention_ms
{
mem_widget_state.current_display_time = self.app_config_fields.retention_ms;
self.states.mem_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
mem_widget_state.autohide_timer = Some(Instant::now());
}
@ -2340,7 +2264,6 @@ impl App {
if new_time <= self.app_config_fields.retention_ms {
net_widget_state.current_display_time = new_time;
self.states.net_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
net_widget_state.autohide_timer = Some(Instant::now());
}
@ -2348,7 +2271,6 @@ impl App {
!= self.app_config_fields.retention_ms
{
net_widget_state.current_display_time = self.app_config_fields.retention_ms;
self.states.net_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
net_widget_state.autohide_timer = Some(Instant::now());
}
@ -2374,7 +2296,6 @@ impl App {
if new_time >= constants::STALE_MIN_MILLISECONDS {
cpu_widget_state.current_display_time = new_time;
self.states.cpu_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
cpu_widget_state.autohide_timer = Some(Instant::now());
}
@ -2382,7 +2303,6 @@ impl App {
!= constants::STALE_MIN_MILLISECONDS
{
cpu_widget_state.current_display_time = constants::STALE_MIN_MILLISECONDS;
self.states.cpu_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
cpu_widget_state.autohide_timer = Some(Instant::now());
}
@ -2402,7 +2322,6 @@ impl App {
if new_time >= constants::STALE_MIN_MILLISECONDS {
mem_widget_state.current_display_time = new_time;
self.states.mem_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
mem_widget_state.autohide_timer = Some(Instant::now());
}
@ -2410,7 +2329,6 @@ impl App {
!= constants::STALE_MIN_MILLISECONDS
{
mem_widget_state.current_display_time = constants::STALE_MIN_MILLISECONDS;
self.states.mem_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
mem_widget_state.autohide_timer = Some(Instant::now());
}
@ -2430,7 +2348,6 @@ impl App {
if new_time >= constants::STALE_MIN_MILLISECONDS {
net_widget_state.current_display_time = new_time;
self.states.net_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
net_widget_state.autohide_timer = Some(Instant::now());
}
@ -2438,7 +2355,6 @@ impl App {
!= constants::STALE_MIN_MILLISECONDS
{
net_widget_state.current_display_time = constants::STALE_MIN_MILLISECONDS;
self.states.net_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
net_widget_state.autohide_timer = Some(Instant::now());
}
@ -2457,7 +2373,6 @@ impl App {
.get_mut(&self.current_widget.widget_id)
{
cpu_widget_state.current_display_time = self.app_config_fields.default_time_value;
self.states.cpu_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
cpu_widget_state.autohide_timer = Some(Instant::now());
}
@ -2472,7 +2387,6 @@ impl App {
.get_mut(&self.current_widget.widget_id)
{
mem_widget_state.current_display_time = self.app_config_fields.default_time_value;
self.states.mem_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
mem_widget_state.autohide_timer = Some(Instant::now());
}
@ -2487,7 +2401,6 @@ impl App {
.get_mut(&self.current_widget.widget_id)
{
net_widget_state.current_display_time = self.app_config_fields.default_time_value;
self.states.net_state.force_update = Some(self.current_widget.widget_id);
if self.app_config_fields.autohide_time {
net_widget_state.autohide_timer = Some(Instant::now());
}
@ -2802,6 +2715,7 @@ impl App {
}
}
BottomWidgetType::Battery => {
#[cfg(feature = "battery")]
if let Some(battery_widget_state) = self
.states
.battery_state
@ -2813,10 +2727,12 @@ impl App {
{
if (x >= *tlc_x && y >= *tlc_y) && (x <= *brc_x && y <= *brc_y)
{
if itx >= self.converted_data.battery_data.len() {
let num_batteries =
self.data_store.get_data().battery_harvest.len();
if itx >= num_batteries {
// range check to keep within current data
battery_widget_state.currently_selected_battery_index =
self.converted_data.battery_data.len() - 1;
num_batteries - 1;
} else {
battery_widget_state.currently_selected_battery_index =
itx;

13
src/app/data/mod.rs Normal file
View File

@ -0,0 +1,13 @@
//! How we manage data internally.
mod time_series;
pub use time_series::{TimeSeriesData, Values};
mod process;
pub use process::ProcessData;
mod store;
pub use store::*;
mod temperature;
pub use temperature::*;

55
src/app/data/process.rs Normal file
View File

@ -0,0 +1,55 @@
use std::{collections::BTreeMap, vec::Vec};
use hashbrown::HashMap;
use crate::collection::processes::{Pid, ProcessHarvest};
#[derive(Clone, Debug, Default)]
pub struct ProcessData {
/// A PID to process data map.
pub process_harvest: BTreeMap<Pid, ProcessHarvest>,
/// A mapping between a process PID to any children process PIDs.
pub process_parent_mapping: HashMap<Pid, Vec<Pid>>,
/// PIDs corresponding to processes that have no parents.
pub orphan_pids: Vec<Pid>,
}
impl ProcessData {
pub(super) fn ingest(&mut self, list_of_processes: Vec<ProcessHarvest>) {
self.process_parent_mapping.clear();
// Reverse as otherwise the pid mappings are in the wrong order.
list_of_processes.iter().rev().for_each(|process_harvest| {
if let Some(parent_pid) = process_harvest.parent_pid {
if let Some(entry) = self.process_parent_mapping.get_mut(&parent_pid) {
entry.push(process_harvest.pid);
} else {
self.process_parent_mapping
.insert(parent_pid, vec![process_harvest.pid]);
}
}
});
self.process_parent_mapping.shrink_to_fit();
let process_pid_map = list_of_processes
.into_iter()
.map(|process| (process.pid, process))
.collect();
self.process_harvest = process_pid_map;
// We collect all processes that either:
// - Do not have a parent PID (that is, they are orphan processes)
// - Have a parent PID but we don't have the parent (we promote them as orphans)
self.orphan_pids = self
.process_harvest
.iter()
.filter_map(|(pid, process_harvest)| match process_harvest.parent_pid {
Some(parent_pid) if self.process_harvest.contains_key(&parent_pid) => None,
_ => Some(*pid),
})
.collect();
}
}
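The orphan-promotion rule in `ingest` can be seen in isolation: a PID counts as an orphan if it has no parent, or if its parent was not part of the harvested set. Below is a standalone sketch with simplified stand-in types (not the real `ProcessHarvest`):

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for Pid / ProcessHarvest, for illustration only.
type Pid = i32;
struct Entry {
    parent: Option<Pid>,
}

// A PID is an orphan if it has no parent, or its parent was not harvested.
fn orphans(harvest: &BTreeMap<Pid, Entry>) -> Vec<Pid> {
    harvest
        .iter()
        .filter_map(|(pid, entry)| match entry.parent {
            Some(parent) if harvest.contains_key(&parent) => None,
            _ => Some(*pid),
        })
        .collect()
}

fn main() {
    let mut harvest = BTreeMap::new();
    harvest.insert(1, Entry { parent: None }); // true orphan (e.g. init)
    harvest.insert(10, Entry { parent: Some(1) }); // has a known parent
    harvest.insert(42, Entry { parent: Some(999) }); // parent missing, promoted to orphan
    assert_eq!(orphans(&harvest), vec![1, 42]);
}
```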

324
src/app/data/store.rs Normal file
View File

@ -0,0 +1,324 @@
use std::{
time::{Duration, Instant},
vec::Vec,
};
use super::{ProcessData, TimeSeriesData};
#[cfg(feature = "battery")]
use crate::collection::batteries;
use crate::{
app::AppConfigFields,
collection::{Data, cpu, disks, memory::MemData, network},
dec_bytes_per_second_string,
utils::data_units::DataUnit,
widgets::{DiskWidgetData, TempWidgetData},
};
/// A collection of data. This is where we dump data into.
///
/// TODO: Maybe reduce visibility of internal data, make it only accessible through DataStore?
#[derive(Debug, Clone)]
pub struct StoredData {
pub last_update_time: Instant, // FIXME: (points_rework_v1) remove this?
pub timeseries_data: TimeSeriesData,
pub network_harvest: network::NetworkHarvest,
pub ram_harvest: Option<MemData>,
pub swap_harvest: Option<MemData>,
#[cfg(not(target_os = "windows"))]
pub cache_harvest: Option<MemData>,
#[cfg(feature = "zfs")]
pub arc_harvest: Option<MemData>,
#[cfg(feature = "gpu")]
pub gpu_harvest: Vec<(String, MemData)>,
pub cpu_harvest: cpu::CpuHarvest,
pub load_avg_harvest: cpu::LoadAvgHarvest,
pub process_data: ProcessData,
/// TODO: (points_rework_v1) Might be a better way to do this without having to store here?
pub prev_io: Vec<(u64, u64)>,
pub disk_harvest: Vec<DiskWidgetData>,
pub temp_data: Vec<TempWidgetData>,
#[cfg(feature = "battery")]
pub battery_harvest: Vec<batteries::BatteryData>,
}
impl Default for StoredData {
fn default() -> Self {
StoredData {
last_update_time: Instant::now(),
timeseries_data: TimeSeriesData::default(),
network_harvest: network::NetworkHarvest::default(),
ram_harvest: None,
#[cfg(not(target_os = "windows"))]
cache_harvest: None,
swap_harvest: None,
cpu_harvest: cpu::CpuHarvest::default(),
load_avg_harvest: cpu::LoadAvgHarvest::default(),
process_data: Default::default(),
prev_io: Vec::default(),
disk_harvest: Vec::default(),
temp_data: Vec::default(),
#[cfg(feature = "battery")]
battery_harvest: Vec::default(),
#[cfg(feature = "zfs")]
arc_harvest: None,
#[cfg(feature = "gpu")]
gpu_harvest: Vec::default(),
}
}
}
impl StoredData {
pub fn reset(&mut self) {
*self = StoredData::default();
}
#[allow(
clippy::boxed_local,
reason = "This avoids warnings on certain platforms (e.g. 32-bit)."
)]
fn eat_data(&mut self, mut data: Box<Data>, settings: &AppConfigFields) {
let harvested_time = data.collection_time;
// We must adjust all the network values to their selected type (defaults to bits).
if matches!(settings.network_unit_type, DataUnit::Byte) {
if let Some(network) = &mut data.network {
network.rx /= 8;
network.tx /= 8;
}
}
if !settings.use_basic_mode {
self.timeseries_data.add(&data);
}
if let Some(network) = data.network {
self.network_harvest = network;
}
self.ram_harvest = data.memory;
self.swap_harvest = data.swap;
#[cfg(not(target_os = "windows"))]
{
self.cache_harvest = data.cache;
}
#[cfg(feature = "zfs")]
{
self.arc_harvest = data.arc;
}
#[cfg(feature = "gpu")]
if let Some(gpu) = data.gpu {
self.gpu_harvest = gpu;
}
if let Some(cpu) = data.cpu {
self.cpu_harvest = cpu;
}
if let Some(load_avg) = data.load_avg {
self.load_avg_harvest = load_avg;
}
self.temp_data = data
.temperature_sensors
.map(|sensors| {
sensors
.into_iter()
.map(|temp| TempWidgetData {
sensor: temp.name,
temperature: temp
.temperature
.map(|c| settings.temperature_type.convert_temp_unit(c)),
})
.collect()
})
.unwrap_or_default();
if let Some(disks) = data.disks {
if let Some(io) = data.io {
self.eat_disks(disks, io, harvested_time);
}
}
if let Some(list_of_processes) = data.list_of_processes {
self.process_data.ingest(list_of_processes);
}
#[cfg(feature = "battery")]
{
if let Some(list_of_batteries) = data.list_of_batteries {
self.battery_harvest = list_of_batteries;
}
}
// And we're done eating. Update time and push the new entry!
self.last_update_time = harvested_time;
}
fn eat_disks(
&mut self, disks: Vec<disks::DiskHarvest>, io: disks::IoHarvest, harvested_time: Instant,
) {
let time_since_last_harvest = harvested_time
.duration_since(self.last_update_time)
.as_secs_f64();
self.disk_harvest.clear();
let prev_io_diff = disks.len().saturating_sub(self.prev_io.len());
self.prev_io.reserve(prev_io_diff);
self.prev_io.extend((0..prev_io_diff).map(|_| (0, 0)));
for (itx, device) in disks.into_iter().enumerate() {
let Some(checked_name) = ({
#[cfg(target_os = "windows")]
{
match &device.volume_name {
Some(volume_name) => Some(volume_name.as_str()),
None => device.name.split('/').last(),
}
}
#[cfg(not(target_os = "windows"))]
{
#[cfg(feature = "zfs")]
{
if !device.name.starts_with('/') {
Some(device.name.as_str()) // use the whole zfs
// dataset name
} else {
device.name.split('/').last()
}
}
#[cfg(not(feature = "zfs"))]
{
device.name.split('/').last()
}
}
}) else {
continue;
};
let io_device = {
#[cfg(target_os = "macos")]
{
use std::sync::OnceLock;
use regex::Regex;
// Must trim one level further for macOS!
static DISK_REGEX: OnceLock<Regex> = OnceLock::new();
#[expect(
clippy::regex_creation_in_loops,
reason = "this is fine since it's done via a static OnceLock. In the future though, separate it out."
)]
if let Some(new_name) = DISK_REGEX
.get_or_init(|| Regex::new(r"disk\d+").unwrap())
.find(checked_name)
{
io.get(new_name.as_str())
} else {
None
}
}
#[cfg(not(target_os = "macos"))]
{
io.get(checked_name)
}
};
let (mut io_read, mut io_write) = ("N/A".into(), "N/A".into());
if let Some(Some(io_device)) = io_device {
if let Some(prev_io) = self.prev_io.get_mut(itx) {
let r_rate = ((io_device.read_bytes.saturating_sub(prev_io.0)) as f64
/ time_since_last_harvest)
.round() as u64;
let w_rate = ((io_device.write_bytes.saturating_sub(prev_io.1)) as f64
/ time_since_last_harvest)
.round() as u64;
*prev_io = (io_device.read_bytes, io_device.write_bytes);
io_read = dec_bytes_per_second_string(r_rate).into();
io_write = dec_bytes_per_second_string(w_rate).into();
}
}
let summed_total_bytes = match (device.used_space, device.free_space) {
(Some(used), Some(free)) => Some(used + free),
_ => None,
};
self.disk_harvest.push(DiskWidgetData {
name: device.name,
mount_point: device.mount_point,
free_bytes: device.free_space,
used_bytes: device.used_space,
total_bytes: device.total_space,
summed_total_bytes,
io_read,
io_write,
});
}
}
}
/// If we freeze data collection updates, we want to return a "frozen" copy
/// of the data at the time, while still updating things in the background.
#[derive(Default)]
pub enum FrozenState {
#[default]
NotFrozen,
Frozen(Box<StoredData>),
}
/// What data to share with other parts of the application.
#[derive(Default)]
pub struct DataStore {
frozen_state: FrozenState,
main: StoredData,
}
impl DataStore {
/// Toggle whether the [`DataStore`] is frozen or not.
pub fn toggle_frozen(&mut self) {
match &self.frozen_state {
FrozenState::NotFrozen => {
self.frozen_state = FrozenState::Frozen(Box::new(self.main.clone()));
}
FrozenState::Frozen(_) => self.frozen_state = FrozenState::NotFrozen,
}
}
/// Return whether the [`DataStore`] is frozen or not.
pub fn is_frozen(&self) -> bool {
matches!(self.frozen_state, FrozenState::Frozen(_))
}
/// Return a reference to the currently available data. Note that if the data is
/// in a frozen state, it will return the snapshot of data from when it was frozen.
pub fn get_data(&self) -> &StoredData {
match &self.frozen_state {
FrozenState::NotFrozen => &self.main,
FrozenState::Frozen(collected_data) => collected_data,
}
}
/// Eat data.
pub fn eat_data(&mut self, data: Box<Data>, settings: &AppConfigFields) {
self.main.eat_data(data, settings);
}
/// Clean data.
pub fn clean_data(&mut self, max_duration: Duration) {
self.main.timeseries_data.prune(max_duration);
}
/// Reset data state.
pub fn reset(&mut self) {
self.frozen_state = FrozenState::NotFrozen;
self.main = StoredData::default();
}
}
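The split between `frozen_state` and `main` above is what lets collection keep running while the display is paused: writes always go to `main`, and reads go through the snapshot while frozen. A minimal standalone sketch of that pattern over a toy data type (not the real `StoredData`):

```rust
/// Toy stand-in for StoredData.
#[derive(Clone, Default)]
struct Snapshot {
    points: Vec<u64>,
}

#[derive(Default)]
enum Frozen {
    #[default]
    No,
    Yes(Box<Snapshot>),
}

#[derive(Default)]
struct Store {
    frozen: Frozen,
    main: Snapshot,
}

impl Store {
    // New data always lands in `main`, even while a snapshot is held.
    fn push(&mut self, value: u64) {
        self.main.points.push(value);
    }

    // Freezing clones the current state; thawing simply drops the clone.
    fn toggle_frozen(&mut self) {
        match &self.frozen {
            Frozen::No => self.frozen = Frozen::Yes(Box::new(self.main.clone())),
            Frozen::Yes(_) => self.frozen = Frozen::No,
        }
    }

    // Readers see the snapshot while frozen, the live data otherwise.
    fn get(&self) -> &Snapshot {
        match &self.frozen {
            Frozen::No => &self.main,
            Frozen::Yes(snapshot) => snapshot,
        }
    }
}

fn main() {
    let mut store = Store::default();
    store.push(1);
    store.toggle_frozen();
    store.push(2); // still collected in the background
    assert_eq!(store.get().points, vec![1]); // frozen view
    store.toggle_frozen();
    assert_eq!(store.get().points, vec![1, 2]); // live view again
}
```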

View File

@ -0,0 +1,83 @@
//! Code around temperature data.
use std::{fmt::Display, str::FromStr};
#[derive(Clone, Debug, Copy, PartialEq, Eq, Default)]
pub enum TemperatureType {
#[default]
Celsius,
Kelvin,
Fahrenheit,
}
impl FromStr for TemperatureType {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"fahrenheit" | "f" => Ok(TemperatureType::Fahrenheit),
"kelvin" | "k" => Ok(TemperatureType::Kelvin),
"celsius" | "c" => Ok(TemperatureType::Celsius),
_ => Err(format!(
"'{s}' is an invalid temperature type, use one of: [kelvin, k, celsius, c, fahrenheit, f]."
)),
}
}
}
impl TemperatureType {
/// Given a temperature in Celsius, convert it if necessary for a different
/// unit.
pub fn convert_temp_unit(&self, celsius: f32) -> TypedTemperature {
match self {
TemperatureType::Celsius => TypedTemperature::Celsius(celsius.ceil() as u32),
TemperatureType::Kelvin => TypedTemperature::Kelvin((celsius + 273.15).ceil() as u32),
TemperatureType::Fahrenheit => {
TypedTemperature::Fahrenheit(((celsius * (9.0 / 5.0)) + 32.0).ceil() as u32)
}
}
}
}
/// A temperature and its type.
#[derive(Debug, PartialEq, Clone, Eq, PartialOrd, Ord)]
pub enum TypedTemperature {
Celsius(u32),
Kelvin(u32),
Fahrenheit(u32),
}
impl Display for TypedTemperature {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
TypedTemperature::Celsius(val) => write!(f, "{val}°C"),
TypedTemperature::Kelvin(val) => write!(f, "{val}K"),
TypedTemperature::Fahrenheit(val) => write!(f, "{val}°F"),
}
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn temp_conversions() {
const TEMP: f32 = 100.0;
assert_eq!(
TemperatureType::Celsius.convert_temp_unit(TEMP),
TypedTemperature::Celsius(TEMP as u32),
);
assert_eq!(
TemperatureType::Kelvin.convert_temp_unit(TEMP),
TypedTemperature::Kelvin(373.15_f32.ceil() as u32)
);
assert_eq!(
TemperatureType::Fahrenheit.convert_temp_unit(TEMP),
TypedTemperature::Fahrenheit(212)
);
}
}
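As a quick usage sketch (assuming the definitions above are in scope, e.g. as a doc test on `TemperatureType`):

```rust
fn example() {
    // Accepted strings per the FromStr impl above: "celsius"/"c", "kelvin"/"k", "fahrenheit"/"f".
    let unit: TemperatureType = "f".parse().expect("valid temperature unit");
    // 100 °C becomes 212 °F; Display appends the unit suffix.
    assert_eq!(unit.convert_temp_unit(100.0).to_string(), "212°F");
}
```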

229
src/app/data/time_series.rs Normal file
View File

@ -0,0 +1,229 @@
//! Time series data.
use std::{
cmp::Ordering,
time::{Duration, Instant},
vec::Vec,
};
#[cfg(feature = "gpu")]
use hashbrown::{HashMap, HashSet}; // TODO: Try fxhash again.
use timeless::data::ChunkedData;
use crate::collection::Data;
/// Values corresponding to a time slice.
pub type Values = ChunkedData<f64>;
/// Represents time series data in a chunked, deduped manner.
///
/// Properties:
/// - Time in this manner is represented in a reverse-offset fashion from the current time.
/// - All data is stored in SoA fashion.
/// - Values are stored in a chunked format, which facilitates gaps in data collection if needed.
/// - Additional metadata is stored to make data pruning over time easy.
#[derive(Clone, Debug, Default)]
pub struct TimeSeriesData {
/// Time values.
///
/// TODO: (points_rework_v1) Either store millisecond-level only or offsets only.
pub time: Vec<Instant>,
/// Network RX data.
pub rx: Values,
/// Network TX data.
pub tx: Values,
/// CPU data.
pub cpu: Vec<Values>,
/// RAM memory data.
pub ram: Values,
/// Swap data.
pub swap: Values,
#[cfg(not(target_os = "windows"))]
/// Cache data.
pub cache_mem: Values,
#[cfg(feature = "zfs")]
/// Arc data.
pub arc_mem: Values,
#[cfg(feature = "gpu")]
/// GPU memory data.
pub gpu_mem: HashMap<String, Values>,
}
impl TimeSeriesData {
/// Add a new data point.
pub fn add(&mut self, data: &Data) {
self.time.push(data.collection_time);
if let Some(network) = &data.network {
self.rx.push(network.rx as f64);
self.tx.push(network.tx as f64);
} else {
self.rx.insert_break();
self.tx.insert_break();
}
if let Some(cpu) = &data.cpu {
match self.cpu.len().cmp(&cpu.len()) {
Ordering::Less => {
let diff = cpu.len() - self.cpu.len();
self.cpu.reserve_exact(diff);
for _ in 0..diff {
self.cpu.push(Default::default());
}
}
Ordering::Greater => {
let diff = self.cpu.len() - cpu.len();
let offset = self.cpu.len() - diff;
for curr in &mut self.cpu[offset..] {
curr.insert_break();
}
}
Ordering::Equal => {}
}
for (curr, new_data) in self.cpu.iter_mut().zip(cpu.iter()) {
curr.push(new_data.cpu_usage);
}
} else {
for c in &mut self.cpu {
c.insert_break();
}
}
if let Some(memory) = &data.memory {
self.ram.push(memory.percentage());
} else {
self.ram.insert_break();
}
if let Some(swap) = &data.swap {
self.swap.push(swap.percentage());
} else {
self.swap.insert_break();
}
#[cfg(not(target_os = "windows"))]
{
if let Some(cache) = &data.cache {
self.cache_mem.push(cache.percentage());
} else {
self.cache_mem.insert_break();
}
}
#[cfg(feature = "zfs")]
{
if let Some(arc) = &data.arc {
self.arc_mem.push(arc.percentage());
} else {
self.arc_mem.insert_break();
}
}
#[cfg(feature = "gpu")]
{
if let Some(gpu) = &data.gpu {
let mut not_visited = self
.gpu_mem
.keys()
.map(String::to_owned)
.collect::<HashSet<_>>();
for (name, new_data) in gpu {
not_visited.remove(name);
if !self.gpu_mem.contains_key(name) {
self.gpu_mem
.insert(name.to_string(), ChunkedData::default());
}
let curr = self
.gpu_mem
.get_mut(name)
.expect("entry must exist as it was created above");
curr.push(new_data.percentage());
}
for nv in not_visited {
if let Some(entry) = self.gpu_mem.get_mut(&nv) {
entry.insert_break();
}
}
} else {
for g in self.gpu_mem.values_mut() {
g.insert_break();
}
}
}
}
/// Prune any data older than the given duration.
pub fn prune(&mut self, max_age: Duration) {
if self.time.is_empty() {
return;
}
let now = Instant::now();
let end = {
let partition_point = self
.time
.partition_point(|then| now.duration_since(*then) > max_age);
// Partition point returns the first index that does not match the predicate, so minus one.
if partition_point > 0 {
partition_point - 1
} else {
// If the partition point was 0, then it means all values are too new to be pruned.
crate::info!("Skipping prune.");
return;
}
};
crate::info!("Pruning up to index {end}.");
// Note that end here is _inclusive_.
self.time.drain(0..=end);
self.time.shrink_to_fit();
let _ = self.rx.prune_and_shrink_to_fit(end);
let _ = self.tx.prune_and_shrink_to_fit(end);
for cpu in &mut self.cpu {
let _ = cpu.prune_and_shrink_to_fit(end);
}
let _ = self.ram.prune_and_shrink_to_fit(end);
let _ = self.swap.prune_and_shrink_to_fit(end);
#[cfg(not(target_os = "windows"))]
let _ = self.cache_mem.prune_and_shrink_to_fit(end);
#[cfg(feature = "zfs")]
let _ = self.arc_mem.prune_and_shrink_to_fit(end);
#[cfg(feature = "gpu")]
{
self.gpu_mem.retain(|_, gpu| {
let _ = gpu.prune(end);
// Remove the entry if it is empty. We can always add it again later.
if gpu.no_elements() {
false
} else {
gpu.shrink_to_fit();
true
}
});
}
}
}
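The pruning above is essentially a `partition_point` over ages: samples are stored oldest-first, so the partition point is the number of leading entries that are too old, and those can be drained in one pass (the code above additionally handles the inclusive-end/minus-one bookkeeping and skips the work when nothing is old enough). A self-contained sketch of the same idea on a plain `Vec<Instant>`:

```rust
use std::time::{Duration, Instant};

// Drop every timestamp older than `max_age`, assuming `times` is sorted oldest-first.
fn prune(times: &mut Vec<Instant>, max_age: Duration, now: Instant) {
    // partition_point returns the count of leading entries satisfying the predicate,
    // i.e. the index of the first entry that is still young enough to keep.
    let cut = times.partition_point(|t| now.duration_since(*t) > max_age);
    times.drain(0..cut);
}

fn main() {
    let t0 = Instant::now();
    // Pretend "now" is 120s after the first sample.
    let now = t0 + Duration::from_secs(120);
    let mut times = vec![
        t0,                            // 120s old: pruned
        t0 + Duration::from_secs(75),  // 45s old: kept
        t0 + Duration::from_secs(115), // 5s old: kept
    ];
    prune(&mut times, Duration::from_secs(60), now);
    assert_eq!(times.len(), 2);
}
```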

View File

@ -1,480 +0,0 @@
//! In charge of cleaning, processing, and managing data. I couldn't think of
//! a better name for the file. Since I called data collection "harvesting",
//! then this is the farmer I guess.
//!
//! Essentially the main goal is to shift the initial calculation and
//! distribution of joiner points and data to one central location that will
//! only do it *once* upon receiving the data --- as opposed to doing it on
//! canvas draw, which will be a costly process.
//!
//! This will also handle the *cleaning* of stale data. That should be done
//! in some manner (timer on another thread, some loop) that will occasionally
//! call the purging function. Failure to do so *will* result in a growing
//! memory usage and higher CPU usage - you will be trying to process more and
//! more points as this is used!
use std::{collections::BTreeMap, time::Instant, vec::Vec};
use hashbrown::HashMap;
#[cfg(feature = "battery")]
use crate::data_collection::batteries;
use crate::{
data_collection::{
cpu, disks, memory, network,
processes::{Pid, ProcessHarvest},
temperature, Data,
},
utils::data_prefixes::*,
};
pub type TimeOffset = f64;
pub type Value = f64;
#[derive(Debug, Default, Clone)]
pub struct TimedData {
pub rx_data: Value,
pub tx_data: Value,
pub cpu_data: Vec<Value>,
pub load_avg_data: [f32; 3],
pub mem_data: Option<Value>,
#[cfg(not(target_os = "windows"))]
pub cache_data: Option<Value>,
pub swap_data: Option<Value>,
#[cfg(feature = "zfs")]
pub arc_data: Option<Value>,
#[cfg(feature = "gpu")]
pub gpu_data: Vec<Option<Value>>,
}
#[derive(Clone, Debug, Default)]
pub struct ProcessData {
/// A PID to process data map.
pub process_harvest: BTreeMap<Pid, ProcessHarvest>,
/// A mapping between a process PID to any children process PIDs.
pub process_parent_mapping: HashMap<Pid, Vec<Pid>>,
/// PIDs corresponding to processes that have no parents.
pub orphan_pids: Vec<Pid>,
}
impl ProcessData {
fn ingest(&mut self, list_of_processes: Vec<ProcessHarvest>) {
self.process_parent_mapping.clear();
// Reverse as otherwise the pid mappings are in the wrong order.
list_of_processes.iter().rev().for_each(|process_harvest| {
if let Some(parent_pid) = process_harvest.parent_pid {
if let Some(entry) = self.process_parent_mapping.get_mut(&parent_pid) {
entry.push(process_harvest.pid);
} else {
self.process_parent_mapping
.insert(parent_pid, vec![process_harvest.pid]);
}
}
});
self.process_parent_mapping.shrink_to_fit();
let process_pid_map = list_of_processes
.into_iter()
.map(|process| (process.pid, process))
.collect();
self.process_harvest = process_pid_map;
// We collect all processes that either:
// - Do not have a parent PID (that is, they are orphan processes)
// - Have a parent PID but we don't have the parent (we promote them as orphans)
self.orphan_pids = self
.process_harvest
.iter()
.filter_map(|(pid, process_harvest)| match process_harvest.parent_pid {
Some(parent_pid) if self.process_harvest.contains_key(&parent_pid) => None,
_ => Some(*pid),
})
.collect();
}
}
/// AppCollection represents the pooled data stored within the main app
/// thread. Basically stores a (occasionally cleaned) record of the data
/// collected, and what is needed to convert into a displayable form.
///
/// If the app is *frozen* - that is, we do not want to *display* any changing
/// data, keep updating this. As of 2021-09-08, we just clone the current
/// collection when it freezes to have a snapshot floating around.
///
/// Note that with this method, the *app* thread is responsible for cleaning -
/// not the data collector.
#[derive(Debug, Clone)]
pub struct DataCollection {
pub current_instant: Instant,
pub timed_data_vec: Vec<(Instant, TimedData)>,
pub network_harvest: network::NetworkHarvest,
pub memory_harvest: memory::MemHarvest,
#[cfg(not(target_os = "windows"))]
pub cache_harvest: memory::MemHarvest,
pub swap_harvest: memory::MemHarvest,
pub cpu_harvest: cpu::CpuHarvest,
pub load_avg_harvest: cpu::LoadAvgHarvest,
pub process_data: ProcessData,
pub disk_harvest: Vec<disks::DiskHarvest>,
pub io_harvest: disks::IoHarvest,
pub io_labels_and_prev: Vec<((u64, u64), (u64, u64))>,
pub io_labels: Vec<(String, String)>,
pub temp_harvest: Vec<temperature::TempHarvest>,
#[cfg(feature = "battery")]
pub battery_harvest: Vec<batteries::BatteryHarvest>,
#[cfg(feature = "zfs")]
pub arc_harvest: memory::MemHarvest,
#[cfg(feature = "gpu")]
pub gpu_harvest: Vec<(String, memory::MemHarvest)>,
}
impl Default for DataCollection {
fn default() -> Self {
DataCollection {
current_instant: Instant::now(),
timed_data_vec: Vec::default(),
network_harvest: network::NetworkHarvest::default(),
memory_harvest: memory::MemHarvest::default(),
#[cfg(not(target_os = "windows"))]
cache_harvest: memory::MemHarvest::default(),
swap_harvest: memory::MemHarvest::default(),
cpu_harvest: cpu::CpuHarvest::default(),
load_avg_harvest: cpu::LoadAvgHarvest::default(),
process_data: Default::default(),
disk_harvest: Vec::default(),
io_harvest: disks::IoHarvest::default(),
io_labels_and_prev: Vec::default(),
io_labels: Vec::default(),
temp_harvest: Vec::default(),
#[cfg(feature = "battery")]
battery_harvest: Vec::default(),
#[cfg(feature = "zfs")]
arc_harvest: memory::MemHarvest::default(),
#[cfg(feature = "gpu")]
gpu_harvest: Vec::default(),
}
}
}
impl DataCollection {
pub fn reset(&mut self) {
self.timed_data_vec = Vec::default();
self.network_harvest = network::NetworkHarvest::default();
self.memory_harvest = memory::MemHarvest::default();
self.swap_harvest = memory::MemHarvest::default();
self.cpu_harvest = cpu::CpuHarvest::default();
self.process_data = Default::default();
self.disk_harvest = Vec::default();
self.io_harvest = disks::IoHarvest::default();
self.io_labels_and_prev = Vec::default();
self.temp_harvest = Vec::default();
#[cfg(feature = "battery")]
{
self.battery_harvest = Vec::default();
}
#[cfg(feature = "zfs")]
{
self.arc_harvest = memory::MemHarvest::default();
}
#[cfg(feature = "gpu")]
{
self.gpu_harvest = Vec::default();
}
}
pub fn clean_data(&mut self, max_time_millis: u64) {
let current_time = Instant::now();
let remove_index = match self
.timed_data_vec
.binary_search_by(|(instant, _timed_data)| {
current_time
.duration_since(*instant)
.as_millis()
.cmp(&(max_time_millis.into()))
.reverse()
}) {
Ok(index) => index,
Err(index) => index,
};
self.timed_data_vec.drain(0..remove_index);
self.timed_data_vec.shrink_to_fit();
}
pub fn eat_data(&mut self, harvested_data: Box<Data>) {
let harvested_time = harvested_data.collection_time;
let mut new_entry = TimedData::default();
// Network
if let Some(network) = harvested_data.network {
self.eat_network(network, &mut new_entry);
}
// Memory, Swap
if let (Some(memory), Some(swap)) = (harvested_data.memory, harvested_data.swap) {
self.eat_memory_and_swap(memory, swap, &mut new_entry);
}
// Cache memory
#[cfg(not(target_os = "windows"))]
if let Some(cache) = harvested_data.cache {
self.eat_cache(cache, &mut new_entry);
}
#[cfg(feature = "zfs")]
if let Some(arc) = harvested_data.arc {
self.eat_arc(arc, &mut new_entry);
}
#[cfg(feature = "gpu")]
if let Some(gpu) = harvested_data.gpu {
self.eat_gpu(gpu, &mut new_entry);
}
// CPU
if let Some(cpu) = harvested_data.cpu {
self.eat_cpu(cpu, &mut new_entry);
}
// Load average
if let Some(load_avg) = harvested_data.load_avg {
self.eat_load_avg(load_avg, &mut new_entry);
}
// Temp
if let Some(temperature_sensors) = harvested_data.temperature_sensors {
self.eat_temp(temperature_sensors);
}
// Disks
if let Some(disks) = harvested_data.disks {
if let Some(io) = harvested_data.io {
self.eat_disks(disks, io, harvested_time);
}
}
// Processes
if let Some(list_of_processes) = harvested_data.list_of_processes {
self.eat_proc(list_of_processes);
}
#[cfg(feature = "battery")]
{
// Battery
if let Some(list_of_batteries) = harvested_data.list_of_batteries {
self.eat_battery(list_of_batteries);
}
}
// And we're done eating. Update time and push the new entry!
self.current_instant = harvested_time;
self.timed_data_vec.push((harvested_time, new_entry));
}
fn eat_memory_and_swap(
&mut self, memory: memory::MemHarvest, swap: memory::MemHarvest, new_entry: &mut TimedData,
) {
// Memory
new_entry.mem_data = memory.use_percent;
// Swap
new_entry.swap_data = swap.use_percent;
// In addition copy over latest data for easy reference
self.memory_harvest = memory;
self.swap_harvest = swap;
}
#[cfg(not(target_os = "windows"))]
fn eat_cache(&mut self, cache: memory::MemHarvest, new_entry: &mut TimedData) {
// Cache and buffer memory
new_entry.cache_data = cache.use_percent;
// In addition copy over latest data for easy reference
self.cache_harvest = cache;
}
fn eat_network(&mut self, network: network::NetworkHarvest, new_entry: &mut TimedData) {
// RX
if network.rx > 0 {
new_entry.rx_data = network.rx as f64;
}
// TX
if network.tx > 0 {
new_entry.tx_data = network.tx as f64;
}
// In addition copy over latest data for easy reference
self.network_harvest = network;
}
fn eat_cpu(&mut self, cpu: Vec<cpu::CpuData>, new_entry: &mut TimedData) {
// Note this only pre-calculates the data points - the names will be
// within the local copy of cpu_harvest. Since it's all sequential
// it probably doesn't matter anyways.
cpu.iter()
.for_each(|cpu| new_entry.cpu_data.push(cpu.cpu_usage));
self.cpu_harvest = cpu;
}
fn eat_load_avg(&mut self, load_avg: cpu::LoadAvgHarvest, new_entry: &mut TimedData) {
new_entry.load_avg_data = load_avg;
self.load_avg_harvest = load_avg;
}
fn eat_temp(&mut self, temperature_sensors: Vec<temperature::TempHarvest>) {
self.temp_harvest = temperature_sensors;
}
fn eat_disks(
&mut self, disks: Vec<disks::DiskHarvest>, io: disks::IoHarvest, harvested_time: Instant,
) {
let time_since_last_harvest = harvested_time
.duration_since(self.current_instant)
.as_secs_f64();
for (itx, device) in disks.iter().enumerate() {
let checked_name = {
#[cfg(target_os = "windows")]
{
match &device.volume_name {
Some(volume_name) => Some(volume_name.as_str()),
None => device.name.split('/').last(),
}
}
#[cfg(not(target_os = "windows"))]
{
#[cfg(feature = "zfs")]
{
if !device.name.starts_with('/') {
Some(device.name.as_str()) // use the whole zfs
// dataset name
} else {
device.name.split('/').last()
}
}
#[cfg(not(feature = "zfs"))]
{
device.name.split('/').last()
}
}
};
if let Some(checked_name) = checked_name {
let io_device = {
#[cfg(target_os = "macos")]
{
use std::sync::OnceLock;
use regex::Regex;
// Must trim one level further for macOS!
static DISK_REGEX: OnceLock<Regex> = OnceLock::new();
if let Some(new_name) = DISK_REGEX
.get_or_init(|| Regex::new(r"disk\d+").unwrap())
.find(checked_name)
{
io.get(new_name.as_str())
} else {
None
}
}
#[cfg(not(target_os = "macos"))]
{
io.get(checked_name)
}
};
if let Some(io_device) = io_device {
let (io_r_pt, io_w_pt) = if let Some(io) = io_device {
(io.read_bytes, io.write_bytes)
} else {
(0, 0)
};
if self.io_labels.len() <= itx {
self.io_labels.push((String::default(), String::default()));
}
if self.io_labels_and_prev.len() <= itx {
self.io_labels_and_prev.push(((0, 0), (io_r_pt, io_w_pt)));
}
if let Some((io_curr, io_prev)) = self.io_labels_and_prev.get_mut(itx) {
let r_rate = ((io_r_pt.saturating_sub(io_prev.0)) as f64
/ time_since_last_harvest)
.round() as u64;
let w_rate = ((io_w_pt.saturating_sub(io_prev.1)) as f64
/ time_since_last_harvest)
.round() as u64;
*io_curr = (r_rate, w_rate);
*io_prev = (io_r_pt, io_w_pt);
if let Some(io_labels) = self.io_labels.get_mut(itx) {
let converted_read = get_decimal_bytes(r_rate);
let converted_write = get_decimal_bytes(w_rate);
*io_labels = (
if r_rate >= GIGA_LIMIT {
format!("{:.*}{}/s", 1, converted_read.0, converted_read.1)
} else {
format!("{:.*}{}/s", 0, converted_read.0, converted_read.1)
},
if w_rate >= GIGA_LIMIT {
format!("{:.*}{}/s", 1, converted_write.0, converted_write.1)
} else {
format!("{:.*}{}/s", 0, converted_write.0, converted_write.1)
},
);
}
}
} else {
if self.io_labels.len() <= itx {
self.io_labels.push((String::default(), String::default()));
}
if let Some(io_labels) = self.io_labels.get_mut(itx) {
*io_labels = ("N/A".to_string(), "N/A".to_string());
}
}
}
}
self.disk_harvest = disks;
self.io_harvest = io;
}
fn eat_proc(&mut self, list_of_processes: Vec<ProcessHarvest>) {
self.process_data.ingest(list_of_processes);
}
#[cfg(feature = "battery")]
fn eat_battery(&mut self, list_of_batteries: Vec<batteries::BatteryHarvest>) {
self.battery_harvest = list_of_batteries;
}
#[cfg(feature = "zfs")]
fn eat_arc(&mut self, arc: memory::MemHarvest, new_entry: &mut TimedData) {
new_entry.arc_data = arc.use_percent;
self.arc_harvest = arc;
}
#[cfg(feature = "gpu")]
fn eat_gpu(&mut self, gpu: Vec<(String, memory::MemHarvest)>, new_entry: &mut TimedData) {
// Note this only pre-calculates the data points - the names will be
// within the local copy of gpu_harvest. Since it's all sequential
// it probably doesn't matter anyways.
gpu.iter().for_each(|data| {
new_entry.gpu_data.push(data.1.use_percent);
});
self.gpu_harvest = gpu;
}
}

View File

@ -1,21 +1,32 @@
use regex::Regex;
/// Filters used by widgets to filter out certain entries.
/// TODO: Move this out maybe?
#[derive(Debug, Clone)]
pub struct Filter {
/// Whether the filter _accepts_ all entries that match `list`,
/// or _denies_ any entries that match it.
pub is_list_ignored: bool, // TODO: Maybe change to "ignore_matches"?
is_list_ignored: bool, // TODO: Maybe change to "ignore_matches"?
/// The list of regexes to match against. Whether it goes through
/// the filter or not depends on `is_list_ignored`.
pub list: Vec<regex::Regex>,
list: Vec<Regex>,
}
impl Filter {
/// Create a new filter.
#[inline]
pub(crate) fn new(ignore_matches: bool, list: Vec<Regex>) -> Self {
Self {
is_list_ignored: ignore_matches,
list,
}
}
/// Whether the filter should keep the entry or reject it.
#[inline]
pub(crate) fn keep_entry(&self, value: &str) -> bool {
if self.has_match(value) {
pub(crate) fn should_keep(&self, entry: &str) -> bool {
if self.has_match(entry) {
// If a match is found, then if we wanted to ignore if we match, return false.
// If we want to keep if we match, return true. Thus, return the
// inverse of `is_list_ignored`.
@ -30,6 +41,21 @@ impl Filter {
pub(crate) fn has_match(&self, value: &str) -> bool {
self.list.iter().any(|regex| regex.is_match(value))
}
/// Whether entries matching the list should be ignored or kept.
#[inline]
pub(crate) fn ignore_matches(&self) -> bool {
self.is_list_ignored
}
/// Check a filter if it exists, otherwise accept if it is [`None`].
#[inline]
pub(crate) fn optional_should_keep(filter: &Option<Self>, entry: &str) -> bool {
filter
.as_ref()
.map(|f| f.should_keep(entry))
.unwrap_or(true)
}
}
#[cfg(test)]
@ -56,7 +82,7 @@ mod test {
assert_eq!(
results
.into_iter()
.filter(|r| ignore_true.keep_entry(r))
.filter(|r| ignore_true.should_keep(r))
.collect::<Vec<_>>(),
vec!["wifi_0", "amd gpu"]
);
@ -69,7 +95,7 @@ mod test {
assert_eq!(
results
.into_iter()
.filter(|r| ignore_false.keep_entry(r))
.filter(|r| ignore_false.should_keep(r))
.collect::<Vec<_>>(),
vec!["CPU socket temperature", "motherboard temperature"]
);
@ -85,7 +111,7 @@ mod test {
assert_eq!(
results
.into_iter()
.filter(|r| multi_true.keep_entry(r))
.filter(|r| multi_true.should_keep(r))
.collect::<Vec<_>>(),
vec!["wifi_0", "amd gpu"]
);
@ -101,7 +127,7 @@ mod test {
assert_eq!(
results
.into_iter()
.filter(|r| multi_false.keep_entry(r))
.filter(|r| multi_false.should_keep(r))
.collect::<Vec<_>>(),
vec!["CPU socket temperature", "motherboard temperature"]
);
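For reference, the renamed API reads naturally at call sites. Here is a crate-internal sketch based on the `new` and `should_keep` signatures in this diff (both are `pub(crate)`, so this only illustrates the semantics):

```rust
use regex::Regex;

fn example() {
    // Deny-list style (is_list_ignored = true): reject anything matching the list.
    let deny = Filter::new(
        true,
        vec![Regex::new("cpu").unwrap(), Regex::new("wifi").unwrap()],
    );
    assert!(!deny.should_keep("cpu thermal zone")); // matches "cpu", so rejected
    assert!(deny.should_keep("amd gpu")); // no match, so kept

    // Allow-list style (is_list_ignored = false): only keep entries matching the list.
    let allow = Filter::new(false, vec![Regex::new("^/dev/sd").unwrap()]);
    assert!(allow.should_keep("/dev/sda1"));
    assert!(!allow.should_keep("tmpfs"));
}
```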

View File

@ -1,46 +0,0 @@
use super::DataCollection;
/// The [`FrozenState`] indicates whether the application state should be
/// frozen. It is either not frozen or frozen and containing a copy of the state
/// at the time.
pub enum FrozenState {
NotFrozen,
Frozen(Box<DataCollection>),
}
impl Default for FrozenState {
fn default() -> Self {
Self::NotFrozen
}
}
pub type IsFrozen = bool;
impl FrozenState {
/// Checks whether the [`FrozenState`] is currently frozen.
pub fn is_frozen(&self) -> IsFrozen {
matches!(self, FrozenState::Frozen(_))
}
/// Freezes the [`FrozenState`].
pub fn freeze(&mut self, data: Box<DataCollection>) {
*self = FrozenState::Frozen(data);
}
/// Unfreezes the [`FrozenState`].
pub fn thaw(&mut self) {
*self = FrozenState::NotFrozen;
}
/// Toggles the [`FrozenState`] and returns whether it is now frozen.
pub fn toggle(&mut self, data: &DataCollection) -> IsFrozen {
if self.is_frozen() {
self.thaw();
false
} else {
// Could we use an Arc instead? Is it worth it?
self.freeze(Box::new(data.clone()));
true
}
}
}

View File

@ -19,7 +19,7 @@ type ColumnMappings = (u32, BTreeMap<LineSegment, ColumnRowMappings>);
impl BottomLayout {
pub fn get_movement_mappings(&mut self) {
#[allow(clippy::suspicious_operation_groupings)] // Have to enable this, clippy really doesn't like me doing this with tuples...
#[expect(clippy::suspicious_operation_groupings)] // Have to enable this, clippy really doesn't like me doing this with tuples...
fn is_intersecting(a: LineSegment, b: LineSegment) -> bool {
a.0 >= b.0 && a.1 <= b.1
|| a.1 >= b.1 && a.0 <= b.0
@ -663,21 +663,20 @@ impl BottomLayout {
BottomLayout {
total_row_height_ratio: 3,
rows: vec![
BottomRow::new(vec![BottomCol::new(vec![
BottomColRow::new(vec![cpu]).canvas_handled()
BottomRow::new(vec![
BottomCol::new(vec![BottomColRow::new(vec![cpu]).canvas_handled()])
.canvas_handled(),
])
.canvas_handled()])
.canvas_handled(),
BottomRow::new(vec![BottomCol::new(vec![BottomColRow::new(vec![
mem, net,
BottomRow::new(vec![
BottomCol::new(vec![BottomColRow::new(vec![mem, net]).canvas_handled()])
.canvas_handled(),
])
.canvas_handled()])
.canvas_handled()])
.canvas_handled(),
BottomRow::new(vec![BottomCol::new(vec![
BottomColRow::new(vec![table]).canvas_handled()
BottomRow::new(vec![
BottomCol::new(vec![BottomColRow::new(vec![table]).canvas_handled()])
.canvas_handled(),
])
.canvas_handled()])
.canvas_handled(),
BottomRow::new(table_widgets).canvas_handled(),
],
@ -745,11 +744,6 @@ impl BottomRow {
self.constraint = IntermediaryConstraint::CanvasHandled { ratio: None };
self
}
pub fn grow(mut self, minimum: Option<u32>) -> Self {
self.constraint = IntermediaryConstraint::Grow { minimum };
self
}
}
/// Represents a single column in the layout. We assume that even if the column
@ -785,11 +779,6 @@ impl BottomCol {
self.constraint = IntermediaryConstraint::CanvasHandled { ratio: None };
self
}
pub fn grow(mut self, minimum: Option<u32>) -> Self {
self.constraint = IntermediaryConstraint::Grow { minimum };
self
}
}
#[derive(Clone, Default, Debug)]

View File

@ -6,11 +6,11 @@ use anyhow::bail;
use windows::Win32::{
Foundation::{CloseHandle, HANDLE},
System::Threading::{
OpenProcess, TerminateProcess, PROCESS_QUERY_INFORMATION, PROCESS_TERMINATE,
OpenProcess, PROCESS_QUERY_INFORMATION, PROCESS_TERMINATE, TerminateProcess,
},
};
use crate::data_collection::processes::Pid;
use crate::collection::processes::Pid;
/// Based from [this SO answer](https://stackoverflow.com/a/55231715).
#[cfg(target_os = "windows")]
@ -68,9 +68,11 @@ pub fn kill_process_given_pid(pid: Pid, signal: usize) -> anyhow::Result<()> {
let err_code = std::io::Error::last_os_error().raw_os_error();
let err = match err_code {
Some(libc::ESRCH) => "the target process did not exist.",
Some(libc::EPERM) => "the calling process does not have the permissions to terminate the target process(es).",
Some(libc::EPERM) => {
"the calling process does not have the permissions to terminate the target process(es)."
}
Some(libc::EINVAL) => "an invalid signal was specified.",
_ => "Unknown error occurred."
_ => "Unknown error occurred.",
};
if let Some(err_code) = err_code {

View File

@ -6,11 +6,11 @@ use unicode_ellipsis::grapheme_width;
use unicode_segmentation::{GraphemeCursor, GraphemeIncomplete, UnicodeSegmentation};
use crate::{
app::{layout_manager::BottomWidgetType, query::*},
app::layout_manager::BottomWidgetType,
constants,
widgets::{
BatteryWidgetState, CpuWidgetState, DiskTableWidget, MemWidgetState, NetWidgetState,
ProcWidgetState, TempWidgetState,
ProcWidgetState, TempWidgetState, query::ProcessQuery,
},
};
@ -21,7 +21,7 @@ pub struct AppWidgetStates {
pub proc_state: ProcState,
pub temp_state: TempState,
pub disk_state: DiskState,
pub battery_state: BatteryState,
pub battery_state: AppBatteryState,
pub basic_table_widget_state: Option<BasicTableWidgetState>,
}
@ -90,7 +90,7 @@ pub struct AppSearchState {
pub size_mappings: IndexMap<usize, Range<usize>>,
/// The query. TODO: Merge this as one enum.
pub query: Option<Query>,
pub query: Option<ProcessQuery>,
pub error_message: Option<String>,
}
@ -285,38 +285,22 @@ impl ProcState {
}
pub struct NetState {
pub force_update: Option<u64>,
pub widget_states: HashMap<u64, NetWidgetState>,
}
impl NetState {
pub fn init(widget_states: HashMap<u64, NetWidgetState>) -> Self {
NetState {
force_update: None,
widget_states,
}
}
pub fn get_mut_widget_state(&mut self, widget_id: u64) -> Option<&mut NetWidgetState> {
self.widget_states.get_mut(&widget_id)
}
pub fn get_widget_state(&self, widget_id: u64) -> Option<&NetWidgetState> {
self.widget_states.get(&widget_id)
NetState { widget_states }
}
}
pub struct CpuState {
pub force_update: Option<u64>,
pub widget_states: HashMap<u64, CpuWidgetState>,
}
impl CpuState {
pub fn init(widget_states: HashMap<u64, CpuWidgetState>) -> Self {
CpuState {
force_update: None,
widget_states,
}
CpuState { widget_states }
}
pub fn get_mut_widget_state(&mut self, widget_id: u64) -> Option<&mut CpuWidgetState> {
@ -329,24 +313,12 @@ impl CpuState {
}
pub struct MemState {
pub force_update: Option<u64>,
pub widget_states: HashMap<u64, MemWidgetState>,
}
impl MemState {
pub fn init(widget_states: HashMap<u64, MemWidgetState>) -> Self {
MemState {
force_update: None,
widget_states,
}
}
pub fn get_mut_widget_state(&mut self, widget_id: u64) -> Option<&mut MemWidgetState> {
self.widget_states.get_mut(&widget_id)
}
pub fn get_widget_state(&self, widget_id: u64) -> Option<&MemWidgetState> {
self.widget_states.get(&widget_id)
MemState { widget_states }
}
}
@ -391,29 +363,24 @@ pub struct BasicTableWidgetState {
// then we can expand outwards with a normal BasicTableState and a hashmap
pub currently_displayed_widget_type: BottomWidgetType,
pub currently_displayed_widget_id: u64,
pub widget_id: i64,
pub left_tlc: Option<(u16, u16)>,
pub left_brc: Option<(u16, u16)>,
pub right_tlc: Option<(u16, u16)>,
pub right_brc: Option<(u16, u16)>,
}
pub struct BatteryState {
pub struct AppBatteryState {
pub widget_states: HashMap<u64, BatteryWidgetState>,
}
impl BatteryState {
impl AppBatteryState {
pub fn init(widget_states: HashMap<u64, BatteryWidgetState>) -> Self {
BatteryState { widget_states }
AppBatteryState { widget_states }
}
pub fn get_mut_widget_state(&mut self, widget_id: u64) -> Option<&mut BatteryWidgetState> {
self.widget_states.get_mut(&widget_id)
}
pub fn get_widget_state(&self, widget_id: u64) -> Option<&BatteryWidgetState> {
self.widget_states.get(&widget_id)
}
}
#[derive(Default)]

11
src/bin/main.rs Normal file
View File

@ -0,0 +1,11 @@
use bottom::{reset_stdout, start_bottom};
fn main() -> anyhow::Result<()> {
let mut run_error_hook = false;
start_bottom(&mut run_error_hook).inspect_err(|_| {
if run_error_hook {
reset_stdout();
}
})
}

74
src/bin/schema.rs Normal file
View File

@ -0,0 +1,74 @@
#![cfg(feature = "generate_schema")]
use bottom::{options::config, widgets};
use clap::Parser;
use itertools::Itertools;
use strum::VariantArray;
#[derive(Parser)]
struct SchemaOptions {
/// The version of the schema.
version: Option<String>,
}
fn generate_schema(schema_options: SchemaOptions) -> anyhow::Result<()> {
let mut schema = schemars::schema_for!(config::Config);
{
// TODO: Maybe make this case insensitive? See https://stackoverflow.com/a/68639341
let proc_columns = schema.definitions.get_mut("ProcColumn").unwrap();
match proc_columns {
schemars::schema::Schema::Object(proc_columns) => {
let enums = proc_columns.enum_values.as_mut().unwrap();
*enums = widgets::ProcColumn::VARIANTS
.iter()
.flat_map(|var| var.get_schema_names())
.sorted()
.map(|v| serde_json::Value::String(v.to_string()))
.dedup()
.collect();
}
_ => anyhow::bail!("missing proc columns definition"),
}
let disk_columns = schema.definitions.get_mut("DiskColumn").unwrap();
match disk_columns {
schemars::schema::Schema::Object(disk_columns) => {
let enums = disk_columns.enum_values.as_mut().unwrap();
*enums = widgets::DiskColumn::VARIANTS
.iter()
.flat_map(|var| var.get_schema_names())
.sorted()
.map(|v| serde_json::Value::String(v.to_string()))
.dedup()
.collect();
}
_ => anyhow::bail!("missing disk columns definition"),
}
}
let metadata = schema.schema.metadata.as_mut().unwrap();
let version = schema_options.version.unwrap_or("nightly".to_string());
metadata.id = Some(format!(
"https://github.com/ClementTsang/bottom/blob/main/schema/{version}/bottom.json"
));
metadata.description = Some(format!(
"https://clementtsang.github.io/bottom/{}/configuration/config-file",
if version == "nightly" {
"nightly"
} else {
"stable"
}
));
metadata.title = Some(format!("Schema for bottom's config file ({version})",));
println!("{}", serde_json::to_string_pretty(&schema).unwrap());
Ok(())
}
fn main() -> anyhow::Result<()> {
let schema_options = SchemaOptions::parse();
generate_schema(schema_options)?;
Ok(())
}

View File

@ -5,25 +5,25 @@ mod widgets;
use itertools::izip;
use tui::{
Frame, Terminal,
backend::Backend,
layout::{Constraint, Direction, Layout, Rect},
text::Span,
widgets::Paragraph,
Frame, Terminal,
};
use crate::{
app::{
layout_manager::{BottomColRow, BottomLayout, BottomWidgetType, IntermediaryConstraint},
App,
layout_manager::{BottomColRow, BottomLayout, BottomWidgetType, IntermediaryConstraint},
},
constants::*,
options::config::style::ColourPalette,
options::config::style::Styles,
};
/// Handles the canvas' state.
pub struct Painter {
pub colours: ColourPalette,
pub styles: Styles,
previous_height: u16,
previous_width: u16,
@ -47,7 +47,7 @@ pub enum LayoutConstraint {
}
impl Painter {
pub fn init(layout: BottomLayout, styling: ColourPalette) -> anyhow::Result<Self> {
pub fn init(layout: BottomLayout, styling: Styles) -> anyhow::Result<Self> {
// Now for modularity; we have to also initialize the base layouts!
// We want to do this ONCE and reuse; after this we can just construct
// based on the console size.
@ -131,7 +131,7 @@ impl Painter {
});
let painter = Painter {
colours: styling,
styles: styling,
previous_height: 0,
previous_width: 0,
row_constraints,
@ -149,9 +149,9 @@ impl Painter {
pub fn get_border_style(&self, widget_id: u64, selected_widget_id: u64) -> tui::style::Style {
let is_on_widget = widget_id == selected_widget_id;
if is_on_widget {
self.colours.highlighted_border_style
self.styles.highlighted_border_style
} else {
self.colours.border_style
self.styles.border_style
}
}
@ -159,7 +159,7 @@ impl Painter {
f.render_widget(
Paragraph::new(Span::styled(
"Frozen, press 'f' to unfreeze",
self.colours.selected_text_style,
self.styles.selected_text_style,
)),
Layout::default()
.horizontal_margin(1)
@ -174,14 +174,14 @@ impl Painter {
use BottomWidgetType::*;
terminal.draw(|f| {
let (terminal_size, frozen_draw_loc) = if app_state.frozen_state.is_frozen() {
let (terminal_size, frozen_draw_loc) = if app_state.data_store.is_frozen() {
// TODO: Remove built-in cache?
let split_loc = Layout::default()
.constraints([Constraint::Min(0), Constraint::Length(1)])
.split(f.size());
.split(f.area());
(split_loc[0], Some(split_loc[1]))
} else {
(f.size(), None)
(f.area(), None)
};
let terminal_height = terminal_size.height;
let terminal_width = terminal_size.width;
@ -333,64 +333,61 @@ impl Painter {
_ => 0,
};
self.draw_process(f, app_state, rect[0], true, widget_id);
self.draw_process(f, app_state, rect[0], widget_id);
}
Battery =>
{
#[cfg(feature = "battery")]
self.draw_battery(f, app_state, rect[0], app_state.current_widget.widget_id)
}
Battery => self.draw_battery(
f,
app_state,
rect[0],
true,
app_state.current_widget.widget_id,
),
_ => {}
}
} else if app_state.app_config_fields.use_basic_mode {
// Basic mode. This basically removes all graphs but otherwise
// Basic mode. This basically removes all graphs but otherwise
// the same info.
if let Some(frozen_draw_loc) = frozen_draw_loc {
self.draw_frozen_indicator(f, frozen_draw_loc);
}
let actual_cpu_data_len = app_state.converted_data.cpu_data.len().saturating_sub(1);
let data = app_state.data_store.get_data();
let actual_cpu_data_len = data.cpu_harvest.len();
// This fixes #397, apparently if the height is 1, it can't render the CPU
// bars...
let cpu_height = {
let c =
(actual_cpu_data_len / 4) as u16 + u16::from(actual_cpu_data_len % 4 != 0);
let c = (actual_cpu_data_len / 4) as u16
+ u16::from(actual_cpu_data_len % 4 != 0)
+ u16::from(
app_state.app_config_fields.dedicated_average_row
&& actual_cpu_data_len.saturating_sub(1) % 4 != 0,
);
if c <= 1 {
1
} else {
c
}
if c <= 1 { 1 } else { c }
};
let mut mem_rows = 1;
if app_state.converted_data.swap_labels.is_some() {
if data.swap_harvest.is_some() {
mem_rows += 1; // add row for swap
}
#[cfg(feature = "zfs")]
{
if app_state.converted_data.arc_labels.is_some() {
if data.arc_harvest.is_some() {
mem_rows += 1; // add row for arc
}
}
#[cfg(not(target_os = "windows"))]
{
if app_state.converted_data.cache_labels.is_some() {
if data.cache_harvest.is_some() {
mem_rows += 1;
}
}
#[cfg(feature = "gpu")]
{
if let Some(gpu_data) = &app_state.converted_data.gpu_data {
mem_rows += gpu_data.len() as u16; // add row(s) for gpu
}
mem_rows += data.gpu_harvest.len() as u16; // add row(s) for gpu
}
if mem_rows == 1 {
@ -440,18 +437,16 @@ impl Painter {
ProcSort => 2,
_ => 0,
};
self.draw_process(f, app_state, vertical_chunks[3], false, wid);
self.draw_process(f, app_state, vertical_chunks[3], wid);
}
Temp => {
self.draw_temp_table(f, app_state, vertical_chunks[3], widget_id)
}
Battery => self.draw_battery(
f,
app_state,
vertical_chunks[3],
false,
widget_id,
),
Battery =>
{
#[cfg(feature = "battery")]
self.draw_battery(f, app_state, vertical_chunks[3], widget_id)
}
_ => {}
}
}
@ -725,8 +720,12 @@ impl Painter {
Net => self.draw_network(f, app_state, *draw_loc, widget.widget_id),
Temp => self.draw_temp_table(f, app_state, *draw_loc, widget.widget_id),
Disk => self.draw_disk_table(f, app_state, *draw_loc, widget.widget_id),
Proc => self.draw_process(f, app_state, *draw_loc, true, widget.widget_id),
Battery => self.draw_battery(f, app_state, *draw_loc, true, widget.widget_id),
Proc => self.draw_process(f, app_state, *draw_loc, widget.widget_id),
Battery =>
{
#[cfg(feature = "battery")]
self.draw_battery(f, app_state, *draw_loc, widget.widget_id)
}
_ => {}
}
}

View File

@ -1,8 +1,6 @@
//! Lower-level components used throughout bottom.
pub mod data_table;
pub mod pipe_gauge;
pub mod time_graph;
mod tui_widget;
pub mod widget_carousel;
pub use tui_widget::*;

View File

@ -29,6 +29,8 @@ use crate::utils::general::ClampExt;
/// - [`Sortable`]: This table expects sorted data, and there are helper
/// functions to facilitate things like sorting based on a selected column,
/// shortcut column selection support, mouse column selection support, etc.
///
/// FIXME: We already do all the text width checks - can we skip the underlying ones?
pub struct DataTable<DataType, Header, S = Unsortable, C = Column<Header>> {
pub columns: Vec<C>,
pub state: DataTableState,
@ -69,13 +71,13 @@ impl<DataType: DataToCell<H>, H: ColumnHeader, S: SortType, C: DataTableColumn<H
}
/// Sets the scroll position to the first value.
pub fn to_first(&mut self) {
pub fn scroll_to_first(&mut self) {
self.state.current_index = 0;
self.state.scroll_direction = ScrollDirection::Up;
}
/// Sets the scroll position to the last value.
pub fn to_last(&mut self) {
pub fn scroll_to_last(&mut self) {
self.state.current_index = self.data.len().saturating_sub(1);
self.state.scroll_direction = ScrollDirection::Down;
}
@ -126,7 +128,7 @@ impl<DataType: DataToCell<H>, H: ColumnHeader, S: SortType, C: DataTableColumn<H
}
/// Updates the scroll position to a selected index.
#[allow(clippy::comparison_chain)]
#[expect(clippy::comparison_chain)]
pub fn set_position(&mut self, new_index: usize) {
let new_index = new_index.clamp_upper(self.data.len().saturating_sub(1));
if self.state.current_index < new_index {
@ -197,11 +199,11 @@ mod test {
let mut table = DataTable::new(columns, props, styling);
table.set_data((0..=4).map(|index| TestType { index }).collect::<Vec<_>>());
table.to_last();
table.scroll_to_last();
assert_eq!(table.current_index(), 4);
assert_eq!(table.state.scroll_direction, ScrollDirection::Down);
table.to_first();
table.scroll_to_first();
assert_eq!(table.current_index(), 0);
assert_eq!(table.state.scroll_direction, ScrollDirection::Up);

View File

@ -62,8 +62,6 @@ pub trait DataTableColumn<H: ColumnHeader> {
fn is_hidden(&self) -> bool;
fn set_is_hidden(&mut self, is_hidden: bool);
/// The actually displayed "header".
fn header(&self) -> Cow<'static, str>;
@ -114,25 +112,12 @@ impl<H: ColumnHeader> DataTableColumn<H> for Column<H> {
self.is_hidden
}
#[inline]
fn set_is_hidden(&mut self, is_hidden: bool) {
self.is_hidden = is_hidden;
}
fn header(&self) -> Cow<'static, str> {
self.inner.text()
}
}
impl<H: ColumnHeader> Column<H> {
pub const fn new(inner: H) -> Self {
Self {
inner,
bounds: ColumnWidthBounds::FollowHeader,
is_hidden: false,
}
}
pub const fn hard(inner: H, width: u16) -> Self {
Self {
inner,

View File

@ -5,12 +5,11 @@ use std::{
use concat_string::concat_string;
use tui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
text::{Line, Span, Text},
widgets::{Block, Borders, Row, Table},
Frame,
widgets::{Block, Row, Table},
};
use unicode_segmentation::UnicodeSegmentation;
use super::{
CalculateColumnWidths, ColumnHeader, ColumnWidthBounds, DataTable, DataTableColumn, DataToCell,
@ -18,8 +17,8 @@ use super::{
};
use crate::{
app::layout_manager::BottomWidget,
canvas::Painter,
constants::{SIDE_BORDERS, TABLE_GAP_HEIGHT_LIMIT},
canvas::{Painter, drawing_utils::widget_block},
constants::TABLE_GAP_HEIGHT_LIMIT,
utils::strings::truncate_to_text,
};
@ -68,46 +67,41 @@ where
C: DataTableColumn<H>,
{
fn block<'a>(&self, draw_info: &'a DrawInfo, data_len: usize) -> Block<'a> {
let border_style = match draw_info.selection_state {
SelectionState::NotSelected => self.styling.border_style,
SelectionState::Selected | SelectionState::Expanded => {
self.styling.highlighted_border_style
}
let is_selected = match draw_info.selection_state {
SelectionState::NotSelected => false,
SelectionState::Selected | SelectionState::Expanded => true,
};
if !self.props.is_basic {
let block = Block::default()
.borders(Borders::ALL)
.border_style(border_style);
if let Some(title) = self.generate_title(draw_info, data_len) {
block.title(title)
} else {
block
}
} else if draw_info.is_on_widget() {
// Implies it is basic mode but selected.
Block::default()
.borders(SIDE_BORDERS)
.border_style(border_style)
let border_style = if is_selected {
self.styling.highlighted_border_style
} else {
Block::default().borders(Borders::NONE)
self.styling.border_style
};
let mut block = widget_block(self.props.is_basic, is_selected, self.styling.border_type)
.border_style(border_style);
if let Some((left_title, right_title)) = self.generate_title(draw_info, data_len) {
if !self.props.is_basic {
block = block.title_top(left_title);
}
if let Some(right_title) = right_title {
block = block.title_top(right_title);
}
}
block
}
/// Generates a title, given the available space.
pub fn generate_title<'a>(
&self, draw_info: &'a DrawInfo, total_items: usize,
) -> Option<Line<'a>> {
fn generate_title(
&self, draw_info: &'_ DrawInfo, total_items: usize,
) -> Option<(Line<'static>, Option<Line<'static>>)> {
self.props.title.as_ref().map(|title| {
let current_index = self.state.current_index.saturating_add(1);
let draw_loc = draw_info.loc;
let title_style = self.styling.title_style;
let border_style = if draw_info.is_on_widget() {
self.styling.highlighted_border_style
} else {
self.styling.border_style
};
let title = if self.props.show_table_scroll_position {
let pos = current_index.to_string();
@ -123,19 +117,15 @@ where
title.to_string()
};
if draw_info.is_expanded() {
let title_base = concat_string!(title, "── Esc to go back ");
let lines = "".repeat(usize::from(draw_loc.width).saturating_sub(
UnicodeSegmentation::graphemes(title_base.as_str(), true).count() + 2,
));
let esc = concat_string!("", lines, "─ Esc to go back ");
Line::from(vec![
Span::styled(title, title_style),
Span::styled(esc, border_style),
])
let left_title = Line::from(Span::styled(title, title_style)).left_aligned();
let right_title = if draw_info.is_expanded() {
Some(Line::from(" Esc to go back ").right_aligned())
} else {
Line::from(Span::styled(title, title_style))
}
None
};
(left_title, right_title)
})
}
@ -143,11 +133,10 @@ where
&mut self, f: &mut Frame<'_>, draw_info: &DrawInfo, widget: Option<&mut BottomWidget>,
painter: &Painter,
) {
let draw_horizontal = !self.props.is_basic || draw_info.is_on_widget();
let draw_loc = draw_info.loc;
let margined_draw_loc = Layout::default()
.constraints([Constraint::Percentage(100)])
.horizontal_margin(u16::from(!draw_horizontal))
.horizontal_margin(u16::from(self.props.is_basic && !draw_info.is_on_widget()))
.direction(Direction::Horizontal)
.split(draw_loc)[0];
@ -202,8 +191,9 @@ where
if !self.data.is_empty() || !self.first_draw {
if self.first_draw {
self.first_draw = false; // TODO: Doing it this way is fine, but it could be done better (e.g. showing
// custom no results/entries message)
// TODO: Doing it this way is fine, but it could be done better (e.g. showing
// custom no results/entries message)
self.first_draw = false;
if let Some(first_index) = self.first_index {
self.set_position(first_index);
}
@ -256,7 +246,7 @@ where
self.state.calculated_widths.iter().map(|nzu| nzu.get()),
)
.block(block)
.highlight_style(highlight_style)
.row_highlight_style(highlight_style)
.style(self.styling.text_style);
if show_header {

View File

@ -25,11 +25,16 @@ impl SortOrder {
SortOrder::Descending => SortOrder::Ascending,
}
}
/// A hack to get a const default.
pub const fn const_default() -> Self {
Self::Ascending
}
}
impl Default for SortOrder {
fn default() -> Self {
Self::Ascending
Self::const_default()
}
}
@ -163,11 +168,6 @@ where
self.is_hidden
}
#[inline]
fn set_is_hidden(&mut self, is_hidden: bool) {
self.is_hidden = is_hidden;
}
fn header(&self) -> Cow<'static, str> {
self.inner.header()
}
@ -195,18 +195,18 @@ where
/// Creates a new [`SortColumn`] with a hard width, which has no shortcut
/// and sorts by default in ascending order ([`SortOrder::Ascending`]).
pub fn hard(inner: T, width: u16) -> Self {
pub const fn hard(inner: T, width: u16) -> Self {
Self {
inner,
bounds: ColumnWidthBounds::Hard(width),
is_hidden: false,
default_order: SortOrder::default(),
default_order: SortOrder::const_default(),
}
}
/// Creates a new [`SortColumn`] with a soft width, which has no shortcut
/// and sorts by default in ascending order ([`SortOrder::Ascending`]).
pub fn soft(inner: T, max_percentage: Option<f32>) -> Self {
pub const fn soft(inner: T, max_percentage: Option<f32>) -> Self {
Self {
inner,
bounds: ColumnWidthBounds::Soft {
@ -214,18 +214,12 @@ where
max_percentage,
},
is_hidden: false,
default_order: SortOrder::default(),
default_order: SortOrder::const_default(),
}
}
/// Sets the default sort order to [`SortOrder::Ascending`].
pub fn default_ascending(mut self) -> Self {
self.default_order = SortOrder::Ascending;
self
}
/// Sets the default sort order to [`SortOrder::Descending`].
pub fn default_descending(mut self) -> Self {
pub const fn default_descending(mut self) -> Self {
self.default_order = SortOrder::Descending;
self
}

View File

@ -1,11 +1,12 @@
use tui::style::Style;
use tui::{style::Style, widgets::BorderType};
use crate::options::config::style::ColourPalette;
use crate::options::config::style::Styles;
#[derive(Default)]
pub struct DataTableStyling {
pub header_style: Style,
pub border_style: Style,
pub border_type: BorderType,
pub highlighted_border_style: Style,
pub text_style: Style,
pub highlighted_text_style: Style,
@ -13,14 +14,15 @@ pub struct DataTableStyling {
}
impl DataTableStyling {
pub fn from_palette(colours: &ColourPalette) -> Self {
pub fn from_palette(styles: &Styles) -> Self {
Self {
header_style: colours.table_header_style,
border_style: colours.border_style,
highlighted_border_style: colours.highlighted_border_style,
text_style: colours.text_style,
highlighted_text_style: colours.selected_text_style,
title_style: colours.widget_title_style,
header_style: styles.table_header_style,
border_style: styles.border_style,
border_type: styles.border_type,
highlighted_border_style: styles.highlighted_border_style,
text_style: styles.text_style,
highlighted_text_style: styles.selected_text_style,
title_style: styles.widget_title_style,
}
}
}

View File

@ -9,6 +9,7 @@ use tui::{
#[derive(Debug, Clone, Copy)]
pub enum LabelLimit {
None,
#[expect(dead_code)]
Auto(u16),
Bars,
StartLabel,
@ -32,7 +33,7 @@ pub struct PipeGauge<'a> {
hide_parts: LabelLimit,
}
impl<'a> Default for PipeGauge<'a> {
impl Default for PipeGauge<'_> {
fn default() -> Self {
Self {
block: None,
@ -95,7 +96,7 @@ impl<'a> PipeGauge<'a> {
}
}
impl<'a> Widget for PipeGauge<'a> {
impl Widget for PipeGauge<'_> {
fn render(mut self, area: Rect, buf: &mut Buffer) {
buf.set_style(area, self.label_style);
let gauge_area = match self.block.take() {
@ -203,13 +204,15 @@ impl<'a> Widget for PipeGauge<'a> {
let pipe_end =
start + (f64::from(end.saturating_sub(start)) * self.ratio).floor() as u16;
for col in start..pipe_end {
buf.get_mut(col, row).set_symbol("|").set_style(Style {
fg: self.gauge_style.fg,
bg: None,
add_modifier: self.gauge_style.add_modifier,
sub_modifier: self.gauge_style.sub_modifier,
underline_color: None,
});
if let Some(cell) = buf.cell_mut((col, row)) {
cell.set_symbol("|").set_style(Style {
fg: self.gauge_style.fg,
bg: None,
add_modifier: self.gauge_style.add_modifier,
sub_modifier: self.gauge_style.sub_modifier,
underline_color: None,
});
}
}
if (end_label.width() as u16) < end.saturating_sub(start) {

View File

@ -1,37 +1,61 @@
use std::borrow::Cow;
mod time_chart;
use std::{borrow::Cow, time::Instant};
use concat_string::concat_string;
pub use time_chart::*;
use tui::{
Frame,
layout::{Constraint, Rect},
style::Style,
symbols::Marker,
text::{Line, Span},
widgets::{Block, Borders, GraphType},
Frame,
widgets::{BorderType, GraphType},
};
use unicode_segmentation::UnicodeSegmentation;
use super::time_chart::{
Axis, Dataset, LegendPosition, Point, TimeChart, DEFAULT_LEGEND_CONSTRAINTS,
};
use crate::{app::data::Values, canvas::drawing_utils::widget_block};
/// Represents the data required by the [`TimeGraph`].
pub struct GraphData<'a> {
pub points: &'a [Point],
pub style: Style,
pub name: Option<Cow<'a, str>>,
///
/// TODO: We may be able to get rid of this intermediary data structure.
#[derive(Default)]
pub(crate) struct GraphData<'a> {
time: &'a [Instant],
values: Option<&'a Values>,
style: Style,
name: Option<Cow<'a, str>>,
}
impl<'a> GraphData<'a> {
pub fn time(mut self, time: &'a [Instant]) -> Self {
self.time = time;
self
}
pub fn values(mut self, values: &'a Values) -> Self {
self.values = Some(values);
self
}
pub fn style(mut self, style: Style) -> Self {
self.style = style;
self
}
pub fn name(mut self, name: Cow<'a, str>) -> Self {
self.name = Some(name);
self
}
}
pub struct TimeGraph<'a> {
/// The min and max x boundaries. Expects a f64 representing the time range
/// in milliseconds.
pub x_bounds: [u64; 2],
/// The min x value.
pub x_min: f64,
/// Whether to hide the time/x-labels.
pub hide_x_labels: bool,
/// The min and max y boundaries.
pub y_bounds: [f64; 2],
pub y_bounds: AxisBound,
/// Any y-labels.
pub y_labels: &'a [Cow<'a, str>],
@ -42,9 +66,15 @@ pub struct TimeGraph<'a> {
/// The border style.
pub border_style: Style,
/// The border type.
pub border_type: BorderType,
/// The graph title.
pub title: Cow<'a, str>,
/// Whether this graph is selected.
pub is_selected: bool,
/// Whether this graph is expanded.
pub is_expanded: bool,
@ -60,24 +90,26 @@ pub struct TimeGraph<'a> {
/// The marker type. Unlike ratatui's native charts, we assume
/// only a single type of marker.
pub marker: Marker,
/// The chart scaling.
pub scaling: ChartScaling,
}
impl<'a> TimeGraph<'a> {
impl TimeGraph<'_> {
/// Generates the [`Axis`] for the x-axis.
fn generate_x_axis(&self) -> Axis<'_> {
// Due to how we display things, we need to adjust the time bound values.
let time_start = -(self.x_bounds[1] as f64);
let adjusted_x_bounds = [time_start, 0.0];
let adjusted_x_bounds = AxisBound::Min(self.x_min);
if self.hide_x_labels {
Axis::default().bounds(adjusted_x_bounds)
} else {
let xb_one = (self.x_bounds[1] / 1000).to_string();
let xb_zero = (self.x_bounds[0] / 1000).to_string();
let x_bound_left = ((-self.x_min) as u64 / 1000).to_string();
let x_bound_right = "0s";
let x_labels = vec![
Span::styled(concat_string!(xb_one, "s"), self.graph_style),
Span::styled(concat_string!(xb_zero, "s"), self.graph_style),
Span::styled(concat_string!(x_bound_left, "s"), self.graph_style),
Span::styled(x_bound_right, self.graph_style),
];
Axis::default()
@ -100,29 +132,6 @@ impl<'a> TimeGraph<'a> {
)
}
/// Generates a title for the [`TimeGraph`] widget, given the available
/// space.
fn generate_title(&self, draw_loc: Rect) -> Line<'_> {
if self.is_expanded {
let title_base = concat_string!(self.title, "── Esc to go back ");
Line::from(vec![
Span::styled(self.title.as_ref(), self.title_style),
Span::styled(
concat_string!(
"",
"".repeat(usize::from(draw_loc.width).saturating_sub(
UnicodeSegmentation::graphemes(title_base.as_str(), true).count() + 2
)),
"─ Esc to go back "
),
self.border_style,
),
])
} else {
Line::from(Span::styled(self.title.as_ref(), self.title_style))
}
}
/// Draws a time graph at [`Rect`] location provided by `draw_loc`. A time
/// graph is used to display data points throughout time in the x-axis.
///
@ -132,17 +141,26 @@ impl<'a> TimeGraph<'a> {
/// graph.
/// - Expects `graph_data`, which represents *what* data to draw, and
/// various details like style and optional legends.
pub fn draw_time_graph(&self, f: &mut Frame<'_>, draw_loc: Rect, graph_data: &[GraphData<'_>]) {
pub fn draw_time_graph(
&self, f: &mut Frame<'_>, draw_loc: Rect, graph_data: Vec<GraphData<'_>>,
) {
// TODO: (points_rework_v1) can we reduce allocations in the underlying graph by saving some sort of state?
let x_axis = self.generate_x_axis();
let y_axis = self.generate_y_axis();
let data = graph_data.into_iter().map(create_dataset).collect();
// This is some ugly manual loop unswitching. Maybe unnecessary.
// TODO: Optimize this step. Cut out unneeded points.
let data = graph_data.iter().map(create_dataset).collect();
let block = Block::default()
.title(self.generate_title(draw_loc))
.borders(Borders::ALL)
.border_style(self.border_style);
let block = {
let mut b = widget_block(false, self.is_selected, self.border_type)
.border_style(self.border_style)
.title_top(Line::styled(self.title.as_ref(), self.title_style));
if self.is_expanded {
b = b.title_top(Line::styled(" Esc to go back ", self.title_style).right_aligned())
}
b
};
f.render_widget(
TimeChart::new(data)
@ -155,30 +173,38 @@ impl<'a> TimeGraph<'a> {
.hidden_legend_constraints(
self.legend_constraints
.unwrap_or(DEFAULT_LEGEND_CONSTRAINTS),
),
)
.scaling(self.scaling),
draw_loc,
)
}
}
/// Creates a new [`Dataset`].
fn create_dataset<'a>(data: &'a GraphData<'a>) -> Dataset<'a> {
fn create_dataset(data: GraphData<'_>) -> Dataset<'_> {
let GraphData {
points,
time,
values,
style,
name,
} = data;
let Some(values) = values else {
return Dataset::default();
};
let dataset = Dataset::default()
.style(*style)
.data(points)
.style(style)
.data(time, values)
.graph_type(GraphType::Line);
if let Some(name) = name {
dataset.name(name.as_ref())
let dataset = if let Some(name) = name {
dataset.name(name)
} else {
dataset
}
};
dataset
}
#[cfg(test)]
@ -186,14 +212,14 @@ mod test {
use std::borrow::Cow;
use tui::{
layout::Rect,
style::{Color, Style},
symbols::Marker,
text::{Line, Span},
text::Span,
widgets::BorderType,
};
use super::TimeGraph;
use crate::canvas::components::time_chart::Axis;
use super::{AxisBound, ChartScaling, TimeGraph};
use crate::canvas::components::time_graph::Axis;
const Y_LABELS: [Cow<'static, str>; 3] = [
Cow::Borrowed("0%"),
@ -204,17 +230,20 @@ mod test {
fn create_time_graph() -> TimeGraph<'static> {
TimeGraph {
title: " Network ".into(),
x_bounds: [0, 15000],
x_min: -15000.0,
hide_x_labels: false,
y_bounds: [0.0, 100.5],
y_bounds: AxisBound::Max(100.5),
y_labels: &Y_LABELS,
graph_style: Style::default().fg(Color::Red),
border_style: Style::default().fg(Color::Blue),
border_type: BorderType::Plain,
is_selected: false,
is_expanded: false,
title_style: Style::default().fg(Color::Cyan),
legend_position: None,
legend_constraints: None,
marker: Marker::Braille,
scaling: ChartScaling::Linear,
}
}
@ -225,7 +254,7 @@ mod test {
let x_axis = tg.generate_x_axis();
let actual = Axis::default()
.bounds([-15000.0, 0.0])
.bounds(AxisBound::Min(-15000.0))
.labels(vec![Span::styled("15s", style), Span::styled("0s", style)])
.style(style);
assert_eq!(x_axis.bounds, actual.bounds);
@ -240,7 +269,7 @@ mod test {
let y_axis = tg.generate_y_axis();
let actual = Axis::default()
.bounds([0.0, 100.5])
.bounds(AxisBound::Max(100.5))
.labels(vec![
Span::styled("0%", style),
Span::styled("50%", style),
@ -252,26 +281,4 @@ mod test {
assert_eq!(y_axis.labels, actual.labels);
assert_eq!(y_axis.style, actual.style);
}
#[test]
fn time_graph_gen_title() {
let mut time_graph = create_time_graph();
let draw_loc = Rect::new(0, 0, 32, 100);
let title = time_graph.generate_title(draw_loc);
assert_eq!(
title,
Line::from(Span::styled(" Network ", Style::default().fg(Color::Cyan)))
);
time_graph.is_expanded = true;
let title = time_graph.generate_title(draw_loc);
assert_eq!(
title,
Line::from(vec![
Span::styled(" Network ", Style::default().fg(Color::Cyan)),
Span::styled("───── Esc to go back ", Style::default().fg(Color::Blue))
])
);
}
}

View File

@ -7,7 +7,7 @@
mod canvas;
mod points;
use std::{cmp::max, str::FromStr};
use std::{cmp::max, str::FromStr, time::Instant};
use canvas::*;
use tui::{
@ -16,16 +16,44 @@ use tui::{
style::{Color, Style, Styled},
symbols::{self, Marker},
text::{Line, Span},
widgets::{block::BlockExt, Block, Borders, GraphType, Widget},
widgets::{Block, Borders, GraphType, Widget, block::BlockExt},
};
use unicode_width::UnicodeWidthStr;
use crate::{
app::data::Values,
utils::general::{saturating_log2, saturating_log10},
};
pub const DEFAULT_LEGEND_CONSTRAINTS: (Constraint, Constraint) =
(Constraint::Ratio(1, 4), Constraint::Length(4));
/// A single graph point.
pub type Point = (f64, f64);
/// An axis bound type. Allows us to save a f64 since we know that we are
/// usually bound from some values [0.0, a], or [-b, 0.0].
#[derive(Debug, Default, Clone, Copy, PartialEq)]
pub enum AxisBound {
/// Just 0.
#[default]
Zero,
/// Bound by a minimum value to 0.
Min(f64),
/// Bound by 0 and a max value.
Max(f64),
}
impl AxisBound {
fn get_bounds(&self) -> [f64; 2] {
match self {
AxisBound::Zero => [0.0, 0.0],
AxisBound::Min(min) => [*min, 0.0],
AxisBound::Max(max) => [0.0, *max],
}
}
}
/// An X or Y axis for the [`TimeChart`] widget
#[derive(Debug, Default, Clone, PartialEq)]
pub struct Axis<'a> {
@ -33,7 +61,7 @@ pub struct Axis<'a> {
pub(crate) title: Option<Line<'a>>,
/// Bounds for the axis (all data points outside these limits will not be
/// represented)
pub(crate) bounds: [f64; 2],
pub(crate) bounds: AxisBound,
/// A list of labels to put to the left or below the axis
pub(crate) labels: Option<Vec<Span<'a>>>,
/// The style used to draw the axis itself
@ -47,10 +75,8 @@ impl<'a> Axis<'a> {
///
/// It will be displayed at the end of the axis. For an X axis this is the
/// right, for a Y axis, this is the top.
///
/// This is a fluent setter method which must be chained or used as it
/// consumes self
#[must_use = "method moves the value of self and returns the modified value"]
#[cfg_attr(not(test), expect(dead_code))]
pub fn title<T>(mut self, title: T) -> Axis<'a>
where
T: Into<Line<'a>>,
@ -59,14 +85,9 @@ impl<'a> Axis<'a> {
self
}
/// Sets the bounds of this axis
///
/// In other words, sets the min and max value on this axis.
///
/// This is a fluent setter method which must be chained or used as it
/// consumes self
/// Sets the bounds of this axis.
#[must_use = "method moves the value of self and returns the modified value"]
pub fn bounds(mut self, bounds: [f64; 2]) -> Axis<'a> {
pub fn bounds(mut self, bounds: AxisBound) -> Axis<'a> {
self.bounds = bounds;
self
}
@ -96,6 +117,7 @@ impl<'a> Axis<'a> {
///
/// On the X axis, this parameter only affects the first label.
#[must_use = "method moves the value of self and returns the modified value"]
#[expect(dead_code)]
pub fn labels_alignment(mut self, alignment: Alignment) -> Axis<'a> {
self.labels_alignment = alignment;
self
@ -223,7 +245,7 @@ impl FromStr for LegendPosition {
type Err = ParseLegendPositionError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.to_ascii_lowercase().as_str() {
match s {
"top" => Ok(Self::Top),
"top-left" => Ok(Self::TopLeft),
"top-right" => Ok(Self::TopRight),
@ -237,23 +259,28 @@ impl FromStr for LegendPosition {
}
}
#[derive(Debug, Default, Clone)]
enum Data<'a> {
Some {
times: &'a [Instant],
values: &'a Values,
},
#[default]
None,
}
/// A group of data points
///
/// This is the main element composing a [`TimeChart`].
///
/// A dataset can be [named](Dataset::name). Only named datasets will be
/// rendered in the legend.
///
/// After that, you can pass it data with [`Dataset::data`]. Data is an array of
/// `f64` tuples (`(f64, f64)`), the first element being X and the second Y.
/// It's also worth noting that, unlike the [`Rect`], here the Y axis is bottom
/// to top, as in math.
#[derive(Debug, Default, Clone, PartialEq)]
#[derive(Debug, Default, Clone)]
pub struct Dataset<'a> {
/// Name of the dataset (used in the legend if shown)
name: Option<Line<'a>>,
/// A reference to the actual data
data: &'a [(f64, f64)],
/// A reference to data.
data: Data<'a>,
/// Symbol used for each points of this dataset
marker: symbols::Marker,
/// Determines graph type used for drawing points
@ -282,8 +309,8 @@ impl<'a> Dataset<'a> {
/// element being X and the second Y. It's also worth noting that,
/// unlike the [`Rect`], here the Y axis is bottom to top, as in math.
#[must_use = "method moves the value of self and returns the modified value"]
pub fn data(mut self, data: &'a [(f64, f64)]) -> Dataset<'a> {
self.data = data;
pub fn data(mut self, times: &'a [Instant], values: &'a Values) -> Dataset<'a> {
self.data = Data::Some { times, values };
self
}
@ -295,10 +322,8 @@ impl<'a> Dataset<'a> {
///
/// Note [`Marker::Braille`] requires a font that supports Unicode Braille
/// Patterns.
///
/// This is a fluent setter method which must be chained or used as it
/// consumes self
#[must_use = "method moves the value of self and returns the modified value"]
#[expect(dead_code)]
pub fn marker(mut self, marker: symbols::Marker) -> Dataset<'a> {
self.marker = marker;
self
@ -354,6 +379,28 @@ struct ChartLayout {
graph_area: Rect,
}
/// Whether to additionally scale all values before displaying them. Defaults to none.
#[derive(Default, Debug, Clone, Copy)]
pub(crate) enum ChartScaling {
#[default]
Linear,
Log10,
Log2,
}
impl ChartScaling {
/// Scale a value.
pub(super) fn scale(&self, value: f64) -> f64 {
// Remember to do saturating log checks as otherwise 0.0 becomes inf, and you get
// gaps!
match self {
ChartScaling::Linear => value,
ChartScaling::Log10 => saturating_log10(value),
ChartScaling::Log2 => saturating_log2(value),
}
}
}
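As an aside: the `saturating_log10` / `saturating_log2` helpers used by `scale` above live in `utils::general` and are not shown in this diff, so the following is only a minimal sketch of the idea under that assumption (the real helpers may differ in detail). The point is to clamp the input so that a zero value never becomes an infinite log and punches a gap into the chart:

fn clamped_log10(value: f64) -> f64 {
    // Clamp non-positive (and sub-1) inputs to 0.0 so log10(0.0) = -inf never
    // reaches the plotted series.
    if value > 1.0 { value.log10() } else { 0.0 }
}

fn main() {
    assert_eq!(clamped_log10(0.0), 0.0); // a plain log10 would give -inf here
    assert!((clamped_log10(1000.0) - 3.0).abs() < 1e-9);
}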
/// A "custom" chart, just a slightly tweaked [`tui::widgets::Chart`] from
/// ratatui, but with greater control over the legend, and built with the idea
/// of drawing data points relative to a time-based x-axis.
@ -364,7 +411,7 @@ struct ChartLayout {
/// - Automatic interpolation to points that fall *just* outside of the screen.
///
/// TODO: Support for putting the legend on the left side.
#[derive(Debug, Default, Clone, PartialEq)]
#[derive(Debug, Default, Clone)]
pub struct TimeChart<'a> {
/// A block to display around the widget eventually
block: Option<Block<'a>>,
@ -380,17 +427,17 @@ pub struct TimeChart<'a> {
legend_style: Style,
/// Constraints used to determine whether the legend should be shown or not
hidden_legend_constraints: (Constraint, Constraint),
/// The position detnermine where the legenth is shown or hide regaurdless
/// The position determining whether the legend is shown or hidden, regardless
/// of `hidden_legend_constraints`
legend_position: Option<LegendPosition>,
/// The marker type.
marker: Marker,
/// Whether to scale the values differently.
scaling: ChartScaling,
}
impl<'a> TimeChart<'a> {
/// Creates a chart with the given [datasets](Dataset)
///
/// A chart can render multiple datasets.
/// Creates a chart with the given [datasets](Dataset).
pub fn new(datasets: Vec<Dataset<'a>>) -> TimeChart<'a> {
TimeChart {
block: None,
@ -402,6 +449,7 @@ impl<'a> TimeChart<'a> {
hidden_legend_constraints: (Constraint::Ratio(1, 4), Constraint::Ratio(1, 4)),
legend_position: Some(LegendPosition::default()),
marker: Marker::Braille,
scaling: ChartScaling::default(),
}
}
@ -475,7 +523,7 @@ impl<'a> TimeChart<'a> {
self
}
/// Sets the position of a legend or hide it
/// Sets the position of a legend or hide it.
///
/// The default is [`LegendPosition::TopRight`].
///
@ -493,6 +541,13 @@ impl<'a> TimeChart<'a> {
self
}
/// Set chart scaling.
#[must_use = "method moves the value of self and returns the modified value"]
pub fn scaling(mut self, scaling: ChartScaling) -> TimeChart<'a> {
self.scaling = scaling;
self
}
/// Compute the internal layout of the chart given the area. If the area is
/// too small some elements may be automatically hidden
fn layout(&self, area: Rect) -> ChartLayout {
@ -692,6 +747,8 @@ impl<'a> TimeChart<'a> {
fn render_y_labels(
&self, buf: &mut Buffer, layout: &ChartLayout, chart_area: Rect, graph_area: Rect,
) {
// FIXME: Control how many y-axis labels are rendered based on height.
let Some(x) = layout.label_y else { return };
let labels = self.y_axis.labels.as_ref().unwrap();
let labels_len = labels.len() as u16;
@ -722,8 +779,11 @@ impl Widget for TimeChart<'_> {
// Sample the style of the entire widget. This sample will be used to reset the
// style of the cells that are part of the components put on top of the
// grah area (i.e legend and axis names).
let original_style = buf.get(area.left(), area.top()).style();
// graph area (i.e legend and axis names).
let Some(original_style) = buf.cell((area.left(), area.top())).map(|cell| cell.style())
else {
return;
};
let layout = self.layout(chart_area);
let graph_area = layout.graph_area;
@ -736,32 +796,38 @@ impl Widget for TimeChart<'_> {
if let Some(y) = layout.axis_x {
for x in graph_area.left()..graph_area.right() {
buf.get_mut(x, y)
.set_symbol(symbols::line::HORIZONTAL)
.set_style(self.x_axis.style);
if let Some(cell) = buf.cell_mut((x, y)) {
cell.set_symbol(symbols::line::HORIZONTAL)
.set_style(self.x_axis.style);
}
}
}
if let Some(x) = layout.axis_y {
for y in graph_area.top()..graph_area.bottom() {
buf.get_mut(x, y)
.set_symbol(symbols::line::VERTICAL)
.set_style(self.y_axis.style);
if let Some(cell) = buf.cell_mut((x, y)) {
cell.set_symbol(symbols::line::VERTICAL)
.set_style(self.y_axis.style);
}
}
}
if let Some(y) = layout.axis_x {
if let Some(x) = layout.axis_y {
buf.get_mut(x, y)
.set_symbol(symbols::line::BOTTOM_LEFT)
.set_style(self.x_axis.style);
if let Some(cell) = buf.cell_mut((x, y)) {
cell.set_symbol(symbols::line::BOTTOM_LEFT)
.set_style(self.x_axis.style);
}
}
}
let x_bounds = self.x_axis.bounds.get_bounds();
let y_bounds = self.y_axis.bounds.get_bounds();
Canvas::default()
.background_color(self.style.bg.unwrap_or(Color::Reset))
.x_bounds(self.x_axis.bounds)
.y_bounds(self.y_axis.bounds)
.x_bounds(x_bounds)
.y_bounds(y_bounds)
.marker(self.marker)
.paint(|ctx| {
self.draw_points(ctx);
@ -806,10 +872,15 @@ impl Widget for TimeChart<'_> {
if let Some(legend_area) = layout.legend_area {
buf.set_style(legend_area, original_style);
Block::default()
let block = Block::default()
.borders(Borders::ALL)
.border_style(self.legend_style)
.render(legend_area, buf);
.border_style(self.legend_style);
for pos in block.inner(legend_area).positions() {
if let Some(cell) = buf.cell_mut(pos) {
cell.set_symbol(" ");
}
}
block.render(legend_area, buf);
for (i, (dataset_name, dataset_style)) in self
.datasets
@ -892,7 +963,7 @@ mod tests {
.iter()
.enumerate()
.map(|(i, (x, y, cell))| {
let expected_cell = expected.get(*x, *y);
let expected_cell = expected.cell((*x, *y)).unwrap();
indoc::formatdoc! {"
{i}: at ({x}, {y})
expected: {expected_cell:?}
@ -921,6 +992,8 @@ mod tests {
};
}
use std::time::Duration;
use tui::style::{Modifier, Stylize};
use super::*;
@ -933,7 +1006,17 @@ mod tests {
#[test]
fn it_should_hide_the_legend() {
let data = [(0.0, 5.0), (1.0, 6.0), (3.0, 7.0)];
let now = Instant::now();
let times = [
now,
now.checked_add(Duration::from_secs(1)).unwrap(),
now.checked_add(Duration::from_secs(2)).unwrap(),
];
let mut values = Values::default();
values.push(5.0);
values.push(6.0);
values.push(7.0);
let cases = [
LegendTestCase {
chart_area: Rect::new(0, 0, 100, 100),
@ -950,7 +1033,7 @@ mod tests {
let datasets = (0..10)
.map(|i| {
let name = format!("Dataset #{i}");
Dataset::default().name(name).data(&data)
Dataset::default().name(name).data(&times, &values)
})
.collect::<Vec<_>>();
let chart = TimeChart::new(datasets)
@ -1038,7 +1121,7 @@ mod tests {
assert!(layout.legend_area.is_some());
assert_eq!(layout.legend_area.unwrap().height, 4); // 2 for borders, 2
// for rows
// for rows
}
#[test]

View File

@ -22,8 +22,8 @@ use tui::{
symbols,
text::Line,
widgets::{
canvas::{Line as CanvasLine, Points},
Block, Widget,
canvas::{Line as CanvasLine, Points},
},
};
@ -154,7 +154,7 @@ trait Grid: Debug {
struct BrailleGrid {
width: u16,
height: u16,
cells: Vec<u16>,
cells: Vec<u16>, // FIXME: (points_rework_v1) isn't this really inefficient to go u16 -> String from utf16?
colors: Vec<Color>,
}
@ -171,14 +171,6 @@ impl BrailleGrid {
}
impl Grid for BrailleGrid {
// fn width(&self) -> u16 {
// self.width
// }
// fn height(&self) -> u16 {
// self.height
// }
fn resolution(&self) -> (f64, f64) {
(
f64::from(self.width) * 2.0 - 1.0,
@ -242,14 +234,6 @@ impl CharGrid {
}
impl Grid for CharGrid {
// fn width(&self) -> u16 {
// self.width
// }
// fn height(&self) -> u16 {
// self.height
// }
fn resolution(&self) -> (f64, f64) {
(f64::from(self.width) - 1.0, f64::from(self.height) - 1.0)
}
@ -325,14 +309,6 @@ impl HalfBlockGrid {
}
impl Grid for HalfBlockGrid {
// fn width(&self) -> u16 {
// self.width
// }
// fn height(&self) -> u16 {
// self.height
// }
fn resolution(&self) -> (f64, f64) {
(f64::from(self.width), f64::from(self.height) * 2.0)
}
@ -362,8 +338,9 @@ impl Grid for HalfBlockGrid {
// Note we implement this slightly differently to what is done in ratatui's
// repo, since their version doesn't seem to compile for me...
//
// TODO: Whenever I add this as a valid marker, make sure this works fine with
// the overriden time_chart drawing-layer-thing.
// the overridden time_chart drawing-layer-thing.
// Join the upper and lower rows, and emit a tuple vector of strings to print,
// and their colours.
@ -400,29 +377,8 @@ impl Grid for HalfBlockGrid {
}
}
impl<'a, 'b> Painter<'a, 'b> {
/// Convert the (x, y) coordinates to location of a point on the grid
///
/// # Examples:
/// ```
/// use tui::{
/// symbols,
/// widgets::canvas::{Context, Painter},
/// };
///
/// let mut ctx = Context::new(2, 2, [1.0, 2.0], [0.0, 2.0], symbols::Marker::Braille);
/// let mut painter = Painter::from(&mut ctx);
/// let point = painter.get_point(1.0, 0.0);
/// assert_eq!(point, Some((0, 7)));
/// let point = painter.get_point(1.5, 1.0);
/// assert_eq!(point, Some((1, 3)));
/// let point = painter.get_point(0.0, 0.0);
/// assert_eq!(point, None);
/// let point = painter.get_point(2.0, 2.0);
/// assert_eq!(point, Some((3, 0)));
/// let point = painter.get_point(1.0, 2.0);
/// assert_eq!(point, Some((0, 0)));
/// ```
impl Painter<'_, '_> {
/// Convert the (x, y) coordinates to location of a point on the grid.
pub fn get_point(&self, x: f64, y: f64) -> Option<(usize, usize)> {
let left = self.context.x_bounds[0];
let right = self.context.x_bounds[1];
@ -441,20 +397,7 @@ impl<'a, 'b> Painter<'a, 'b> {
Some((x, y))
}
/// Paint a point of the grid
///
/// # Examples:
/// ```
/// use tui::{
/// style::Color,
/// symbols,
/// widgets::canvas::{Context, Painter},
/// };
///
/// let mut ctx = Context::new(1, 1, [0.0, 2.0], [0.0, 2.0], symbols::Marker::Braille);
/// let mut painter = Painter::from(&mut ctx);
/// let cell = painter.paint(1, 3, Color::Red);
/// ```
/// Paint a point of the grid.
pub fn paint(&mut self, x: usize, y: usize, color: Color) {
self.context.grid.paint(x, y, color);
}
@ -570,31 +513,13 @@ where
/// braille patterns are used as they provide a more fine grained result
/// but you might want to use the simple dot or block instead if the
/// targeted terminal does not support those symbols.
///
/// # Examples
///
/// ```
/// # use tui::widgets::canvas::Canvas;
/// # use tui::symbols;
/// Canvas::default()
/// .marker(symbols::Marker::Braille)
/// .paint(|ctx| {});
///
/// Canvas::default()
/// .marker(symbols::Marker::Dot)
/// .paint(|ctx| {});
///
/// Canvas::default()
/// .marker(symbols::Marker::Block)
/// .paint(|ctx| {});
/// ```
pub fn marker(mut self, marker: symbols::Marker) -> Canvas<'a, F> {
self.marker = marker;
self
}
}
impl<'a, F> Widget for Canvas<'a, F>
impl<F> Widget for Canvas<'_, F>
where
F: Fn(&mut Context<'_>),
{
@ -639,10 +564,11 @@ where
{
if ch != ' ' && ch != '\u{2800}' {
let (x, y) = (i % width, i / width);
buf.get_mut(x as u16 + canvas_area.left(), y as u16 + canvas_area.top())
.set_char(ch)
.set_fg(fg)
.set_bg(bg);
if let Some(cell) =
buf.cell_mut((x as u16 + canvas_area.left(), y as u16 + canvas_area.top()))
{
cell.set_char(ch).set_fg(fg).set_bg(bg);
}
}
}

View File

@ -0,0 +1,128 @@
use itertools::Itertools;
use tui::{
style::Color,
widgets::{
GraphType,
canvas::{Line as CanvasLine, Points},
},
};
use super::{Context, Data, Point, TimeChart};
impl TimeChart<'_> {
pub(crate) fn draw_points(&self, ctx: &mut Context<'_>) {
// Idea is to:
// - Go over all datasets, determine *where* a point will be drawn.
// - Last point wins for what gets drawn.
// - We set _all_ points for all datasets before actually rendering.
//
// By doing this, it's a bit more efficient from my experience than looping
// over each dataset and rendering a new layer each time.
//
// See https://github.com/ClementTsang/bottom/pull/918 and
// https://github.com/ClementTsang/bottom/pull/937 for the original motivation.
//
// We also additionally do some interpolation logic because we may get caught
// missing some points when drawing, but we generally want to avoid
// jarring gaps between the edges when there's a point that is off
// screen and so a line isn't drawn (right edge generally won't have this issue
// but it can happen in some cases).
for dataset in &self.datasets {
let Data::Some { times, values } = dataset.data else {
continue;
};
let Some(current_time) = times.last() else {
continue;
};
let color = dataset.style.fg.unwrap_or(Color::Reset);
let left_edge = self.x_axis.bounds.get_bounds()[0];
// TODO: (points_rework_v1) Can we instead modify the range so it's based on the epoch rather than having to convert?
// TODO: (points_rework_v1) Is this efficient? Or should I prune using take_while first?
for (curr, next) in values
.iter_along_base(times)
.rev()
.map(|(&time, &val)| {
let from_start: f64 =
(current_time.duration_since(time).as_millis() as f64).floor();
// XXX: Should this be generic over dataset.graph_type instead? That would allow us to move
// transformations behind a type - however, that also means that there's some complexity added.
(-from_start, self.scaling.scale(val))
})
.tuple_windows()
{
if curr.0 == left_edge {
// The current point hits the left edge. Draw just the current point and halt.
ctx.draw(&Points {
coords: &[curr],
color,
});
break;
} else if next.0 < left_edge {
// The next point goes past the left edge. Interpolate a point + the line and halt.
let interpolated = interpolate_point(&next, &curr, left_edge);
ctx.draw(&CanvasLine {
x1: curr.0,
y1: curr.1,
x2: left_edge,
y2: interpolated,
color,
});
break;
} else {
// Draw the current point and the line to the next point.
if let GraphType::Line = dataset.graph_type {
ctx.draw(&CanvasLine {
x1: curr.0,
y1: curr.1,
x2: next.0,
y2: next.1,
color,
});
} else {
ctx.draw(&Points {
coords: &[curr],
color,
});
}
}
}
}
}
}
/// Returns the y-axis value for a given `x`, given two points to draw a line
/// between.
fn interpolate_point(older_point: &Point, newer_point: &Point, x: f64) -> f64 {
let delta_x = newer_point.0 - older_point.0;
let delta_y = newer_point.1 - older_point.1;
let slope = delta_y / delta_x;
(older_point.1 + (x - older_point.0) * slope).max(0.0)
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn time_chart_test_interpolation() {
let data = [(-3.0, 8.0), (-1.0, 6.0), (0.0, 5.0)];
assert_eq!(interpolate_point(&data[1], &data[2], 0.0), 5.0);
assert_eq!(interpolate_point(&data[1], &data[2], -0.25), 5.25);
assert_eq!(interpolate_point(&data[1], &data[2], -0.5), 5.5);
assert_eq!(interpolate_point(&data[0], &data[1], -1.0), 6.0);
assert_eq!(interpolate_point(&data[0], &data[1], -1.5), 6.5);
assert_eq!(interpolate_point(&data[0], &data[1], -2.0), 7.0);
assert_eq!(interpolate_point(&data[0], &data[1], -2.5), 7.5);
assert_eq!(interpolate_point(&data[0], &data[1], -3.0), 8.0);
}
}
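As an aside, the traversal described in the comments at the top of this file (walk the points newest to oldest in pairs, draw each segment, and interpolate once a point falls past the left edge) can be shown with a small standalone sketch. The plain `(x, y)` tuples and the `lerp` helper below are illustrative stand-ins for the `Instant`/`Values` buffers and `interpolate_point` above, not part of this diff:

use itertools::Itertools;

// Two-point linear interpolation, mirroring `interpolate_point` above.
fn lerp(older: (f64, f64), newer: (f64, f64), x: f64) -> f64 {
    let slope = (newer.1 - older.1) / (newer.0 - older.0);
    (older.1 + (x - older.0) * slope).max(0.0)
}

fn main() {
    let left_edge = -2.0;
    // Points are stored oldest-first, like the time/value buffers above.
    let points = [(-3.0, 8.0), (-1.0, 6.0), (0.0, 5.0)];

    // Walk newest-to-oldest in overlapping pairs: `curr` is newer, `next` is older.
    for (curr, next) in points.iter().rev().copied().tuple_windows() {
        if next.0 < left_edge {
            // The older point is off screen: clamp the segment at the left edge.
            let clipped = (left_edge, lerp(next, curr, left_edge));
            println!("segment {curr:?} -> {clipped:?}"); // (-1.0, 6.0) -> (-2.0, 7.0)
            break;
        }
        println!("segment {curr:?} -> {next:?}"); // (0.0, 5.0) -> (-1.0, 6.0)
    }
}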

View File

@ -1,4 +0,0 @@
//! Components derived from ratatui widgets.
pub mod pipe_gauge;
pub mod time_chart;

View File

@ -1,219 +0,0 @@
use tui::{
style::Color,
widgets::{
canvas::{Line as CanvasLine, Points},
GraphType,
},
};
use super::{Context, Dataset, Point, TimeChart};
use crate::utils::general::partial_ordering;
impl TimeChart<'_> {
pub(crate) fn draw_points(&self, ctx: &mut Context<'_>) {
// Idea is to:
// - Go over all datasets, determine *where* a point will be drawn.
// - Last point wins for what gets drawn.
// - We set _all_ points for all datasets before actually rendering.
//
// By doing this, it's a bit more efficient from my experience than looping
// over each dataset and rendering a new layer each time.
//
// See <https://github.com/ClementTsang/bottom/pull/918> and <https://github.com/ClementTsang/bottom/pull/937>
// for the original motivation.
//
// We also additionally do some interpolation logic because we may get caught
// missing some points when drawing, but we generally want to avoid
// jarring gaps between the edges when there's a point that is off
// screen and so a line isn't drawn (right edge generally won't have this issue
// but it can happen in some cases).
for dataset in &self.datasets {
let color = dataset.style.fg.unwrap_or(Color::Reset);
let start_bound = self.x_axis.bounds[0];
let end_bound = self.x_axis.bounds[1];
let (start_index, interpolate_start) = get_start(dataset, start_bound);
let (end_index, interpolate_end) = get_end(dataset, end_bound);
let data_slice = &dataset.data[start_index..end_index];
if let Some(interpolate_start) = interpolate_start {
if let (Some(older_point), Some(newer_point)) = (
dataset.data.get(interpolate_start),
dataset.data.get(interpolate_start + 1),
) {
let interpolated_point = (
self.x_axis.bounds[0],
interpolate_point(older_point, newer_point, self.x_axis.bounds[0]),
);
if let GraphType::Line = dataset.graph_type {
ctx.draw(&CanvasLine {
x1: interpolated_point.0,
y1: interpolated_point.1,
x2: newer_point.0,
y2: newer_point.1,
color,
});
} else {
ctx.draw(&Points {
coords: &[interpolated_point],
color,
});
}
}
}
if let GraphType::Line = dataset.graph_type {
for data in data_slice.windows(2) {
ctx.draw(&CanvasLine {
x1: data[0].0,
y1: data[0].1,
x2: data[1].0,
y2: data[1].1,
color,
});
}
} else {
ctx.draw(&Points {
coords: data_slice,
color,
});
}
if let Some(interpolate_end) = interpolate_end {
if let (Some(older_point), Some(newer_point)) = (
dataset.data.get(interpolate_end - 1),
dataset.data.get(interpolate_end),
) {
let interpolated_point = (
self.x_axis.bounds[1],
interpolate_point(older_point, newer_point, self.x_axis.bounds[1]),
);
if let GraphType::Line = dataset.graph_type {
ctx.draw(&CanvasLine {
x1: older_point.0,
y1: older_point.1,
x2: interpolated_point.0,
y2: interpolated_point.1,
color,
});
} else {
ctx.draw(&Points {
coords: &[interpolated_point],
color,
});
}
}
}
}
}
}
/// Returns the start index and potential interpolation index given the start
/// time and the dataset.
fn get_start(dataset: &Dataset<'_>, start_bound: f64) -> (usize, Option<usize>) {
match dataset
.data
.binary_search_by(|(x, _y)| partial_ordering(x, &start_bound))
{
Ok(index) => (index, None),
Err(index) => (index, index.checked_sub(1)),
}
}
/// Returns the end position and potential interpolation index given the end
/// time and the dataset.
fn get_end(dataset: &Dataset<'_>, end_bound: f64) -> (usize, Option<usize>) {
match dataset
.data
.binary_search_by(|(x, _y)| partial_ordering(x, &end_bound))
{
// In the success case, this means we found an index. Add one since we want to include this
// index and we expect to use the returned index as part of a (m..n) range.
Ok(index) => (index.saturating_add(1), None),
// In the fail case, this means we did not find an index, and the returned index is where
// one would *insert* the location. This index is where one would insert to fit
// inside the dataset - and since this is an end bound, index is, in a sense,
// already "+1" for our range later.
Err(index) => (index, {
let sum = index.checked_add(1);
match sum {
Some(s) if s < dataset.data.len() => sum,
_ => None,
}
}),
}
}
/// Returns the y-axis value for a given `x`, given two points to draw a line
/// between.
fn interpolate_point(older_point: &Point, newer_point: &Point, x: f64) -> f64 {
let delta_x = newer_point.0 - older_point.0;
let delta_y = newer_point.1 - older_point.1;
let slope = delta_y / delta_x;
(older_point.1 + (x - older_point.0) * slope).max(0.0)
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn time_chart_test_interpolation() {
let data = [(-3.0, 8.0), (-1.0, 6.0), (0.0, 5.0)];
assert_eq!(interpolate_point(&data[1], &data[2], 0.0), 5.0);
assert_eq!(interpolate_point(&data[1], &data[2], -0.25), 5.25);
assert_eq!(interpolate_point(&data[1], &data[2], -0.5), 5.5);
assert_eq!(interpolate_point(&data[0], &data[1], -1.0), 6.0);
assert_eq!(interpolate_point(&data[0], &data[1], -1.5), 6.5);
assert_eq!(interpolate_point(&data[0], &data[1], -2.0), 7.0);
assert_eq!(interpolate_point(&data[0], &data[1], -2.5), 7.5);
assert_eq!(interpolate_point(&data[0], &data[1], -3.0), 8.0);
}
#[test]
fn time_chart_empty_dataset() {
let data = [];
let dataset = Dataset::default().data(&data);
assert_eq!(get_start(&dataset, -100.0), (0, None));
assert_eq!(get_start(&dataset, -3.0), (0, None));
assert_eq!(get_end(&dataset, 0.0), (0, None));
assert_eq!(get_end(&dataset, 100.0), (0, None));
}
#[test]
fn time_chart_test_data_trimming() {
let data = [
(-3.0, 8.0),
(-2.5, 15.0),
(-2.0, 9.0),
(-1.0, 6.0),
(0.0, 5.0),
];
let dataset = Dataset::default().data(&data);
// Test start point cases (miss and hit)
assert_eq!(get_start(&dataset, -100.0), (0, None));
assert_eq!(get_start(&dataset, -3.0), (0, None));
assert_eq!(get_start(&dataset, -2.8), (1, Some(0)));
assert_eq!(get_start(&dataset, -2.5), (1, None));
assert_eq!(get_start(&dataset, -2.4), (2, Some(1)));
// Test end point cases (miss and hit)
assert_eq!(get_end(&dataset, -2.5), (2, None));
assert_eq!(get_end(&dataset, -2.4), (2, Some(3)));
assert_eq!(get_end(&dataset, -1.4), (3, Some(4)));
assert_eq!(get_end(&dataset, -1.0), (4, None));
assert_eq!(get_end(&dataset, 0.0), (5, None));
assert_eq!(get_end(&dataset, 1.0), (5, None));
assert_eq!(get_end(&dataset, 100.0), (5, None));
}
}

View File

@ -1,12 +1,12 @@
use tui::{
Frame,
layout::{Alignment, Constraint, Direction, Layout, Rect},
terminal::Frame,
text::{Line, Span},
widgets::{Block, Paragraph},
};
use crate::{
app::{layout_manager::BottomWidgetType, App},
app::{App, layout_manager::BottomWidgetType},
canvas::Painter,
};
@ -84,26 +84,25 @@ impl Painter {
},
);
// TODO: I can do this text effect as just a border now!
let left_name = left_table.get_pretty_name();
let right_name = right_table.get_pretty_name();
let num_spaces =
usize::from(draw_loc.width).saturating_sub(6 + left_name.len() + right_name.len());
let carousel_text_style = if widget_id == app_state.current_widget.widget_id {
self.styles.highlighted_border_style
} else {
self.styles.text_style
};
let left_arrow_text = vec![
Line::default(),
Line::from(Span::styled(
format!("{left_name}"),
self.colours.text_style,
)),
Line::from(Span::styled(format!("{left_name}"), carousel_text_style)),
];
let right_arrow_text = vec![
Line::default(),
Line::from(Span::styled(
format!("{right_name}"),
self.colours.text_style,
)),
Line::from(Span::styled(format!("{right_name}"), carousel_text_style)),
];
let margined_draw_loc = Layout::default()

View File

@ -2,21 +2,18 @@
use std::cmp::min;
use tui::{
Frame,
layout::{Alignment, Constraint, Direction, Layout, Rect},
terminal::Frame,
text::{Line, Span, Text},
widgets::{Block, Borders, Paragraph, Wrap},
widgets::{Block, Paragraph, Wrap},
};
use crate::{
app::{App, KillSignal, MAX_PROCESS_SIGNAL},
canvas::Painter,
canvas::{Painter, drawing_utils::dialog_block},
widgets::ProcWidgetMode,
};
const DD_BASE: &str = " Confirm Kill Process ── Esc to close ";
const DD_ERROR_BASE: &str = " Error ── Esc to close ";
cfg_if::cfg_if! {
if #[cfg(target_os = "linux")] {
const SIGNAL_TEXT: [&str; 63] = [
@ -211,12 +208,12 @@ impl Painter {
if MAX_PROCESS_SIGNAL == 1 || !app_state.app_config_fields.is_advanced_kill {
let (yes_button, no_button) = match app_state.delete_dialog_state.selected_signal {
KillSignal::Kill(_) => (
Span::styled("Yes", self.colours.selected_text_style),
Span::styled("No", self.colours.text_style),
Span::styled("Yes", self.styles.selected_text_style),
Span::styled("No", self.styles.text_style),
),
KillSignal::Cancel => (
Span::styled("Yes", self.colours.text_style),
Span::styled("No", self.colours.selected_text_style),
Span::styled("Yes", self.styles.text_style),
Span::styled("No", self.styles.selected_text_style),
),
};
@ -322,11 +319,11 @@ impl Painter {
let mut buttons = SIGNAL_TEXT
[scroll_offset + 1..min((layout.len()) + scroll_offset, SIGNAL_TEXT.len())]
.iter()
.map(|text| Span::styled(*text, self.colours.text_style))
.map(|text| Span::styled(*text, self.styles.text_style))
.collect::<Vec<Span<'_>>>();
buttons.insert(0, Span::styled(SIGNAL_TEXT[0], self.colours.text_style));
buttons.insert(0, Span::styled(SIGNAL_TEXT[0], self.styles.text_style));
buttons[selected - scroll_offset] =
Span::styled(SIGNAL_TEXT[selected], self.colours.selected_text_style);
Span::styled(SIGNAL_TEXT[selected], self.styles.selected_text_style);
app_state.delete_dialog_state.button_positions = layout
.iter()
@ -354,45 +351,24 @@ impl Painter {
) -> bool {
if let Some(dd_text) = dd_text {
let dd_title = if app_state.dd_err.is_some() {
Line::from(vec![
Span::styled(" Error ", self.colours.widget_title_style),
Span::styled(
format!(
"─{}─ Esc to close ",
"".repeat(
usize::from(draw_loc.width)
.saturating_sub(DD_ERROR_BASE.chars().count() + 2)
)
),
self.colours.border_style,
),
])
Line::styled(" Error ", self.styles.widget_title_style)
} else {
Line::from(vec![
Span::styled(" Confirm Kill Process ", self.colours.widget_title_style),
Span::styled(
format!(
"─{}─ Esc to close ",
"".repeat(
usize::from(draw_loc.width)
.saturating_sub(DD_BASE.chars().count() + 2)
)
),
self.colours.border_style,
),
])
Line::styled(" Confirm Kill Process ", self.styles.widget_title_style)
};
f.render_widget(
Paragraph::new(dd_text)
.block(
Block::default()
.title(dd_title)
.style(self.colours.border_style)
.borders(Borders::ALL)
.border_style(self.colours.border_style),
dialog_block(self.styles.border_type)
.title_top(dd_title)
.title_top(
Line::styled(" Esc to close ", self.styles.widget_title_style)
.right_aligned(),
)
.style(self.styles.border_style)
.border_style(self.styles.border_style),
)
.style(self.colours.text_style)
.style(self.styles.text_style)
.alignment(Alignment::Center)
.wrap(Wrap { trim: true }),
draw_loc,
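
The dialog changes above replace the hand-built title (a left title, a `"─".repeat(width)` filler, and an "Esc to close" hint glued into one `Line`) with two `title_top` titles on the block, one left- and one right-aligned, letting the border itself fill the gap. A rough sketch of the new shape, assuming a ratatui version where `Block::title_top` exists (0.28+); `titled_dialog` is an illustrative name, and bottom imports the crate under the alias `tui` rather than `ratatui`.

```rust
use ratatui::{
    text::Line,
    widgets::{Block, BorderType, Borders},
};

/// Same shape as the `dialog_block` helper that appears later in this diff.
fn dialog_block(border_type: BorderType) -> Block<'static> {
    Block::default()
        .border_type(border_type)
        .borders(Borders::ALL)
}

/// Illustrative only: two top titles, so no manual "─".repeat(width) padding is needed.
fn titled_dialog(title: &'static str) -> Block<'static> {
    dialog_block(BorderType::Rounded)
        .title_top(Line::raw(title))
        .title_top(Line::raw(" Esc to close ").right_aligned())
}
```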

View File

@ -1,21 +1,19 @@
use std::cmp::{max, min};
use tui::{
Frame,
layout::{Alignment, Rect},
terminal::Frame,
text::{Line, Span},
widgets::{Block, Borders, Paragraph, Wrap},
widgets::{Paragraph, Wrap},
};
use unicode_width::UnicodeWidthStr;
use crate::{
app::App,
canvas::Painter,
canvas::{Painter, drawing_utils::dialog_block},
constants::{self, HELP_TEXT},
};
const HELP_BASE: &str = " Help ── Esc to close ";
// TODO: [REFACTOR] Make generic dialog boxes to build off of instead?
impl Painter {
fn help_text_lines(&self) -> Vec<Line<'_>> {
@ -28,12 +26,12 @@ impl Painter {
if itx > 0 {
if let Some(header) = section.next() {
styled_help_spans.push(Span::default());
styled_help_spans.push(Span::styled(*header, self.colours.table_header_style));
styled_help_spans.push(Span::styled(*header, self.styles.table_header_style));
}
}
section.for_each(|&text| {
styled_help_spans.push(Span::styled(text, self.colours.text_style))
styled_help_spans.push(Span::styled(text, self.styles.text_style))
});
});
@ -43,24 +41,12 @@ impl Painter {
pub fn draw_help_dialog(&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect) {
let styled_help_text = self.help_text_lines();
let help_title = Line::from(vec![
Span::styled(" Help ", self.colours.widget_title_style),
Span::styled(
format!(
"─{}─ Esc to close ",
"".repeat(
usize::from(draw_loc.width).saturating_sub(HELP_BASE.chars().count() + 2)
)
),
self.colours.border_style,
),
]);
let block = Block::default()
.title(help_title)
.style(self.colours.border_style)
.borders(Borders::ALL)
.border_style(self.colours.border_style);
let block = dialog_block(self.styles.border_type)
.border_style(self.styles.border_style)
.title_top(Line::styled(" Help ", self.styles.widget_title_style))
.title_top(
Line::styled(" Esc to close ", self.styles.widget_title_style).right_aligned(),
);
if app_state.should_get_widget_bounds() {
// We must also recalculate how many lines are wrapping to properly get
@ -116,7 +102,7 @@ impl Painter {
f.render_widget(
Paragraph::new(styled_help_text.clone())
.block(block)
.style(self.colours.text_style)
.style(self.styles.text_style)
.alignment(Alignment::Left)
.wrap(Wrap { trim: true })
.scroll((

View File

@ -1,14 +1,11 @@
use std::{cmp::min, time::Instant};
use std::time::Instant;
use tui::layout::Rect;
use tui::{
layout::Rect,
widgets::{Block, BorderType, Borders},
};
/// Calculate how many bars are to be drawn within basic mode's components.
pub fn calculate_basic_use_bars(use_percentage: f64, num_bars_available: usize) -> usize {
min(
(num_bars_available as f64 * use_percentage / 100.0).round() as usize,
num_bars_available,
)
}
use super::SIDE_BORDERS;
/// Determine whether a graph x-label should be hidden.
pub fn should_hide_x_label(
@ -30,25 +27,35 @@ pub fn should_hide_x_label(
}
}
/// Return a widget block.
pub fn widget_block(is_basic: bool, is_selected: bool, border_type: BorderType) -> Block<'static> {
let mut block = Block::default().border_type(border_type);
if is_basic {
if is_selected {
block = block.borders(SIDE_BORDERS);
} else {
block = block.borders(Borders::empty());
}
} else {
block = block.borders(Borders::all());
}
block
}
/// Return a dialog block.
pub fn dialog_block(border_type: BorderType) -> Block<'static> {
Block::default()
.border_type(border_type)
.borders(Borders::all())
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_calculate_basic_use_bars() {
// Testing various breakpoints and edge cases.
assert_eq!(calculate_basic_use_bars(0.0, 15), 0);
assert_eq!(calculate_basic_use_bars(1.0, 15), 0);
assert_eq!(calculate_basic_use_bars(5.0, 15), 1);
assert_eq!(calculate_basic_use_bars(10.0, 15), 2);
assert_eq!(calculate_basic_use_bars(40.0, 15), 6);
assert_eq!(calculate_basic_use_bars(45.0, 15), 7);
assert_eq!(calculate_basic_use_bars(50.0, 15), 8);
assert_eq!(calculate_basic_use_bars(100.0, 15), 15);
assert_eq!(calculate_basic_use_bars(150.0, 15), 15);
}
#[test]
fn test_should_hide_x_label() {
use std::time::{Duration, Instant};

View File

@ -1,4 +1,3 @@
pub mod battery_display;
pub mod cpu_basic;
pub mod cpu_graph;
pub mod disk_table;
@ -8,3 +7,6 @@ pub mod network_basic;
pub mod network_graph;
pub mod process_table;
pub mod temperature_table;
#[cfg(feature = "battery")]
pub mod battery_display;

View File

@ -1,23 +1,31 @@
use std::cmp::min;
use tui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
terminal::Frame,
text::{Line, Span},
widgets::{Block, Borders, Cell, Paragraph, Row, Table, Tabs},
widgets::{Cell, Paragraph, Row, Table, Tabs},
};
use unicode_segmentation::UnicodeSegmentation;
use unicode_width::UnicodeWidthStr;
use crate::{
app::App,
canvas::{drawing_utils::calculate_basic_use_bars, Painter},
canvas::{Painter, drawing_utils::widget_block},
collection::batteries::BatteryState,
constants::*,
data_conversion::BatteryDuration,
};
/// Calculate how many bars are to be drawn within basic mode's components.
fn calculate_basic_use_bars(use_percentage: f64, num_bars_available: usize) -> usize {
min(
(num_bars_available as f64 * use_percentage / 100.0).round() as usize,
num_bars_available,
)
}
impl Painter {
pub fn draw_battery(
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, draw_border: bool,
widget_id: u64,
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {
let should_get_widget_bounds = app_state.should_get_widget_bounds();
if let Some(battery_widget_state) = app_state
@ -26,11 +34,11 @@ impl Painter {
.widget_states
.get_mut(&widget_id)
{
let is_on_widget = widget_id == app_state.current_widget.widget_id;
let border_style = if is_on_widget {
self.colours.highlighted_border_style
let is_selected = widget_id == app_state.current_widget.widget_id;
let border_style = if is_selected {
self.styles.highlighted_border_style
} else {
self.colours.border_style
self.styles.border_style
};
let table_gap = if draw_loc.height < TABLE_GAP_HEIGHT_LIMIT {
0
@ -38,41 +46,28 @@ impl Painter {
app_state.app_config_fields.table_gap
};
let title = if app_state.is_expanded {
const TITLE_BASE: &str = " Battery ── Esc to go back ";
Line::from(vec![
Span::styled(" Battery ", self.colours.widget_title_style),
Span::styled(
format!(
"─{}─ Esc to go back ",
"".repeat(usize::from(draw_loc.width).saturating_sub(
UnicodeSegmentation::graphemes(TITLE_BASE, true).count() + 2
))
),
border_style,
),
])
} else {
Line::from(Span::styled(" Battery ", self.colours.widget_title_style))
let block = {
let mut block = widget_block(
app_state.app_config_fields.use_basic_mode,
is_selected,
self.styles.border_type,
)
.border_style(border_style)
.title_top(Line::styled(" Battery ", self.styles.widget_title_style));
if app_state.is_expanded {
block = block.title_top(
Line::styled(" Esc to go back ", self.styles.widget_title_style)
.right_aligned(),
)
}
block
};
let battery_block = if draw_border {
Block::default()
.title(title)
.borders(Borders::ALL)
.border_style(border_style)
} else if is_on_widget {
Block::default()
.borders(SIDE_BORDERS)
.border_style(self.colours.highlighted_border_style)
} else {
Block::default().borders(Borders::NONE)
};
if app_state.converted_data.battery_data.len() > 1 {
let battery_names = app_state
.converted_data
.battery_data
let battery_harvest = &(app_state.data_store.get_data().battery_harvest);
if battery_harvest.len() > 1 {
let battery_names = battery_harvest
.iter()
.enumerate()
.map(|(itx, _)| format!("Battery {itx}"))
@ -95,8 +90,8 @@ impl Painter {
.collect::<Vec<_>>(),
)
.divider(tui::symbols::line::VERTICAL)
.style(self.colours.text_style)
.highlight_style(self.colours.selected_text_style)
.style(self.styles.text_style)
.highlight_style(self.styles.selected_text_style)
.select(battery_widget_state.currently_selected_battery_index),
tab_draw_loc,
);
@ -120,68 +115,63 @@ impl Painter {
}
}
let is_basic = app_state.app_config_fields.use_basic_mode;
let margined_draw_loc = Layout::default()
.constraints([Constraint::Percentage(100)])
.horizontal_margin(u16::from(!(is_on_widget || draw_border)))
.horizontal_margin(u16::from(is_basic && !is_selected))
.direction(Direction::Horizontal)
.split(draw_loc)[0];
if let Some(battery_details) = app_state
.converted_data
.battery_data
.get(battery_widget_state.currently_selected_battery_index)
if let Some(battery_details) =
battery_harvest.get(battery_widget_state.currently_selected_battery_index)
{
let full_width = draw_loc.width.saturating_sub(2);
let bar_length = usize::from(full_width.saturating_sub(6));
let charge_percentage = battery_details.charge_percentage;
let num_bars = calculate_basic_use_bars(charge_percentage, bar_length);
let charge_percent = battery_details.charge_percent;
let num_bars = calculate_basic_use_bars(charge_percent, bar_length);
let bars = format!(
"[{}{}{:3.0}%]",
"|".repeat(num_bars),
" ".repeat(bar_length - num_bars),
charge_percentage,
charge_percent,
);
let mut battery_charge_rows = Vec::with_capacity(2);
battery_charge_rows.push(Row::new([
Cell::from("Charge").style(self.colours.text_style)
Cell::from("Charge").style(self.styles.text_style)
]));
battery_charge_rows.push(Row::new([Cell::from(bars).style(
if charge_percentage < 10.0 {
self.colours.low_battery
} else if charge_percentage < 50.0 {
self.colours.medium_battery
if charge_percent < 10.0 {
self.styles.low_battery
} else if charge_percent < 50.0 {
self.styles.medium_battery
} else {
self.colours.high_battery
self.styles.high_battery
},
)]));
let mut battery_rows = Vec::with_capacity(3);
let watt_consumption = battery_details.watt_consumption();
let health = battery_details.health();
battery_rows.push(Row::new([""]).bottom_margin(table_gap + 1));
battery_rows.push(
Row::new(["Rate", &battery_details.watt_consumption])
.style(self.colours.text_style),
);
battery_rows
.push(Row::new(["Rate", &watt_consumption]).style(self.styles.text_style));
battery_rows.push(
Row::new(["State", &battery_details.state]).style(self.colours.text_style),
Row::new(["State", battery_details.state.as_str()])
.style(self.styles.text_style),
);
let mut time: String; // Keep string lifetime in scope.
{
let style = self.colours.text_style;
match &battery_details.battery_duration {
BatteryDuration::ToEmpty(secs) => {
time = long_time(*secs);
if full_width as usize > time.len() {
battery_rows.push(Row::new(["Time to empty", &time]).style(style));
} else {
time = short_time(*secs);
battery_rows.push(Row::new(["To empty", &time]).style(style));
}
}
BatteryDuration::ToFull(secs) => {
let style = self.styles.text_style;
match &battery_details.state {
BatteryState::Charging {
time_to_full: Some(secs),
} => {
time = long_time(*secs);
if full_width as usize > time.len() {
@ -191,17 +181,25 @@ impl Painter {
battery_rows.push(Row::new(["To full", &time]).style(style));
}
}
BatteryDuration::Empty
| BatteryDuration::Full
| BatteryDuration::Unknown => {}
BatteryState::Discharging {
time_to_empty: Some(secs),
} => {
time = long_time(*secs);
if full_width as usize > time.len() {
battery_rows.push(Row::new(["Time to empty", &time]).style(style));
} else {
time = short_time(*secs);
battery_rows.push(Row::new(["To empty", &time]).style(style));
}
}
_ => {}
}
}
battery_rows.push(
Row::new(["Health", &battery_details.health]).style(self.colours.text_style),
);
battery_rows.push(Row::new(["Health", &health]).style(self.styles.text_style));
let header = if app_state.converted_data.battery_data.len() > 1 {
let header = if battery_harvest.len() > 1 {
Row::new([""]).bottom_margin(table_gap)
} else {
Row::default()
@ -210,7 +208,7 @@ impl Painter {
// Draw bar
f.render_widget(
Table::new(battery_charge_rows, [Constraint::Percentage(100)])
.block(battery_block.clone())
.block(block.clone())
.header(header.clone()),
margined_draw_loc,
);
@ -221,7 +219,7 @@ impl Painter {
battery_rows,
[Constraint::Percentage(50), Constraint::Percentage(50)],
)
.block(battery_block)
.block(block)
.header(header),
margined_draw_loc,
);
@ -230,13 +228,10 @@ impl Painter {
contents.push(Line::from(Span::styled(
"No data found for this battery",
self.colours.text_style,
self.styles.text_style,
)));
f.render_widget(
Paragraph::new(contents).block(battery_block),
margined_draw_loc,
);
f.render_widget(Paragraph::new(contents).block(block), margined_draw_loc);
}
if should_get_widget_bounds {
@ -253,8 +248,7 @@ impl Painter {
}
}
#[inline]
fn get_hms(secs: i64) -> (i64, i64, i64) {
fn get_hms(secs: u32) -> (u32, u32, u32) {
let hours = secs / (60 * 60);
let minutes = (secs / 60) - hours * 60;
let seconds = secs - minutes * 60 - hours * 60 * 60;
@ -262,31 +256,24 @@ fn get_hms(secs: i64) -> (i64, i64, i64) {
(hours, minutes, seconds)
}
fn long_time(secs: i64) -> String {
fn long_time(secs: u32) -> String {
let (hours, minutes, seconds) = get_hms(secs);
if hours > 0 {
format!(
"{} hour{}, {} minute{}, {} second{}",
hours,
if hours == 1 { "" } else { "s" },
minutes,
if minutes == 1 { "" } else { "s" },
seconds,
if seconds == 1 { "" } else { "s" },
)
let h = if hours == 1 { "hour" } else { "hours" };
let m = if minutes == 1 { "minute" } else { "minutes" };
let s = if seconds == 1 { "second" } else { "seconds" };
format!("{hours} {h}, {minutes} {m}, {seconds} {s}")
} else {
format!(
"{} minute{}, {} second{}",
minutes,
if minutes == 1 { "" } else { "s" },
seconds,
if seconds == 1 { "" } else { "s" },
)
let m = if minutes == 1 { "minute" } else { "minutes" };
let s = if seconds == 1 { "second" } else { "seconds" };
format!("{minutes} {m}, {seconds} {s}")
}
}
fn short_time(secs: i64) -> String {
fn short_time(secs: u32) -> String {
let (hours, minutes, seconds) = get_hms(secs);
if hours > 0 {
@ -331,4 +318,18 @@ mod tests {
assert_eq!(short_time(3601), "1h 0m 1s".to_string());
assert_eq!(short_time(3661), "1h 1m 1s".to_string());
}
#[test]
fn test_calculate_basic_use_bars() {
// Testing various breakpoints and edge cases.
assert_eq!(calculate_basic_use_bars(0.0, 15), 0);
assert_eq!(calculate_basic_use_bars(1.0, 15), 0);
assert_eq!(calculate_basic_use_bars(5.0, 15), 1);
assert_eq!(calculate_basic_use_bars(10.0, 15), 2);
assert_eq!(calculate_basic_use_bars(40.0, 15), 6);
assert_eq!(calculate_basic_use_bars(45.0, 15), 7);
assert_eq!(calculate_basic_use_bars(50.0, 15), 8);
assert_eq!(calculate_basic_use_bars(100.0, 15), 15);
assert_eq!(calculate_basic_use_bars(150.0, 15), 15);
}
}
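
For reference, the hour/minute/second split that `long_time` and `short_time` rely on is plain integer arithmetic. A standalone sketch of `get_hms` with worked values (the 3661-second case matches the `short_time(3601)`/`short_time(3661)` tests above):

```rust
fn get_hms(secs: u32) -> (u32, u32, u32) {
    let hours = secs / 3600;
    let minutes = (secs / 60) - hours * 60;
    let seconds = secs - minutes * 60 - hours * 3600;
    (hours, minutes, seconds)
}

fn main() {
    // 3661 s = 1 hour, 1 minute, 1 second.
    assert_eq!(get_hms(3661), (1, 1, 1));
    // 125 s = 0 hours, 2 minutes, 5 seconds.
    assert_eq!(get_hms(125), (0, 2, 5));
}
```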

View File

@ -1,20 +1,19 @@
use std::cmp::min;
use itertools::{Either, Itertools};
use tui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
terminal::Frame,
widgets::Block,
};
use crate::{
app::App,
canvas::{
components::pipe_gauge::{LabelLimit, PipeGauge},
Painter,
components::pipe_gauge::{LabelLimit, PipeGauge},
drawing_utils::widget_block,
},
constants::*,
data_collection::cpu::CpuDataType,
data_conversion::CpuWidgetData,
collection::cpu::{CpuData, CpuDataType},
};
impl Painter {
@ -22,34 +21,34 @@ impl Painter {
pub fn draw_basic_cpu(
&self, f: &mut Frame<'_>, app_state: &mut App, mut draw_loc: Rect, widget_id: u64,
) {
// Skip the first element, it's the "all" element
if app_state.converted_data.cpu_data.len() > 1 {
let cpu_data: &[CpuWidgetData] = &app_state.converted_data.cpu_data[1..];
let cpu_data = &app_state.data_store.get_data().cpu_harvest;
// This is a bit complicated, but basically, we want to draw SOME number
// of columns to draw all CPUs. Ideally, as well, we want to not have
// to ever scroll.
// **General logic** - count number of elements in cpu_data. Then see how
// many rows and columns we have in draw_loc (-2 on both sides for border?).
// I think what we can do is try to fit in as many in one column as possible.
// If not, then add a new column.
// Then, from this, split the row space across ALL columns. From there,
// generate the desired lengths.
// This is a bit complicated, but basically, we want to draw SOME number
// of columns to draw all CPUs. Ideally, as well, we want to not have
// to ever scroll.
//
// **General logic** - count number of elements in cpu_data. Then see how
// many rows and columns we have in draw_loc (-2 on both sides for border?).
// I think what we can do is try to fit in as many in one column as possible.
// If not, then add a new column. Then, from this, split the row space across ALL columns.
// From there, generate the desired lengths.
if app_state.current_widget.widget_id == widget_id {
f.render_widget(
Block::default()
.borders(SIDE_BORDERS)
.border_style(self.colours.highlighted_border_style),
draw_loc,
);
}
if app_state.current_widget.widget_id == widget_id {
f.render_widget(
widget_block(true, true, self.styles.border_type)
.border_style(self.styles.highlighted_border_style),
draw_loc,
);
}
let (cpu_data, avg_data) =
maybe_split_avg(cpu_data, app_state.app_config_fields.dedicated_average_row);
if let Some(avg) = avg_data {
let (outer, inner, ratio, style) = self.cpu_info(&avg);
// TODO: This is pretty ugly. Is there a better way of doing it?
let mut cpu_iter = Either::Right(cpu_data.iter());
if app_state.app_config_fields.dedicated_average_row {
if let Some((index, avg)) = cpu_data
.iter()
.find_position(|&datum| matches!(datum.data_type, CpuDataType::Avg))
{
let (outer, inner, ratio, style) = self.cpu_info(avg);
let [cores_loc, mut avg_loc] =
Layout::vertical([Constraint::Min(0), Constraint::Length(1)]).areas(draw_loc);
@ -69,67 +68,66 @@ impl Painter {
);
draw_loc = cores_loc;
cpu_iter = Either::Left(cpu_data.iter().skip(index));
}
}
if draw_loc.height > 0 {
let remaining_height = usize::from(draw_loc.height);
const REQUIRED_COLUMNS: usize = 4;
if draw_loc.height > 0 {
let remaining_height = usize::from(draw_loc.height);
const REQUIRED_COLUMNS: usize = 4;
let col_constraints =
vec![Constraint::Percentage((100 / REQUIRED_COLUMNS) as u16); REQUIRED_COLUMNS];
let columns = Layout::default()
.constraints(col_constraints)
.direction(Direction::Horizontal)
.split(draw_loc);
let col_constraints =
vec![Constraint::Percentage((100 / REQUIRED_COLUMNS) as u16); REQUIRED_COLUMNS];
let columns = Layout::default()
.constraints(col_constraints)
.direction(Direction::Horizontal)
.split(draw_loc);
let mut gauge_info = cpu_data.iter().map(|cpu| self.cpu_info(cpu));
let mut gauge_info = cpu_iter.map(|cpu| self.cpu_info(cpu));
// Very ugly way to sync the gauge limit across all gauges.
let hide_parts = columns
.first()
.map(|col| {
if col.width >= 12 {
LabelLimit::None
} else if col.width >= 10 {
LabelLimit::Bars
} else {
LabelLimit::StartLabel
}
})
.unwrap_or_default();
// Very ugly way to sync the gauge limit across all gauges.
let hide_parts = columns
.first()
.map(|col| {
if col.width >= 12 {
LabelLimit::None
} else if col.width >= 10 {
LabelLimit::Bars
} else {
LabelLimit::StartLabel
}
})
.unwrap_or_default();
let num_entries = cpu_data.len();
let mut row_counter = num_entries;
for (itx, column) in columns.iter().enumerate() {
if REQUIRED_COLUMNS > itx {
let to_divide = REQUIRED_COLUMNS - itx;
let num_taken = min(
remaining_height,
(row_counter / to_divide) + usize::from(row_counter % to_divide != 0),
let num_entries = cpu_data.len();
let mut row_counter = num_entries;
for (itx, column) in columns.iter().enumerate() {
if REQUIRED_COLUMNS > itx {
let to_divide = REQUIRED_COLUMNS - itx;
let num_taken = min(
remaining_height,
(row_counter / to_divide) + usize::from(row_counter % to_divide != 0),
);
row_counter -= num_taken;
let chunk = (&mut gauge_info).take(num_taken);
let rows = Layout::default()
.direction(Direction::Vertical)
.constraints(vec![Constraint::Length(1); remaining_height])
.horizontal_margin(1)
.split(*column);
for ((start_label, inner_label, ratio, style), row) in chunk.zip(rows.iter()) {
f.render_widget(
PipeGauge::default()
.gauge_style(style)
.label_style(style)
.inner_label(inner_label)
.start_label(start_label)
.ratio(ratio)
.hide_parts(hide_parts),
*row,
);
row_counter -= num_taken;
let chunk = (&mut gauge_info).take(num_taken);
let rows = Layout::default()
.direction(Direction::Vertical)
.constraints(vec![Constraint::Length(1); remaining_height])
.horizontal_margin(1)
.split(*column);
for ((start_label, inner_label, ratio, style), row) in
chunk.zip(rows.iter())
{
f.render_widget(
PipeGauge::default()
.gauge_style(style)
.label_style(style)
.inner_label(inner_label)
.start_label(start_label)
.ratio(ratio)
.hide_parts(hide_parts),
*row,
);
}
}
}
}
@ -145,63 +143,19 @@ impl Painter {
}
}
fn cpu_info(&self, cpu: &CpuWidgetData) -> (String, String, f64, tui::style::Style) {
let CpuWidgetData::Entry {
data_type,
last_entry,
..
} = cpu
else {
unreachable!()
};
let (outer, style) = match data_type {
CpuDataType::Avg => ("AVG".to_string(), self.colours.avg_cpu_colour),
#[inline]
fn cpu_info(&self, data: &CpuData) -> (String, String, f64, tui::style::Style) {
let (outer, style) = match data.data_type {
CpuDataType::Avg => ("AVG".to_string(), self.styles.avg_cpu_colour),
CpuDataType::Cpu(index) => (
format!("{index:<3}",),
self.colours.cpu_colour_styles[index % self.colours.cpu_colour_styles.len()],
self.styles.cpu_colour_styles[index % self.styles.cpu_colour_styles.len()],
),
};
let inner = format!("{:>3.0}%", last_entry.round());
let ratio = last_entry / 100.0;
let inner = format!("{:>3.0}%", data.cpu_usage.round());
let ratio = data.cpu_usage / 100.0;
(outer, inner, ratio, style)
}
}
fn maybe_split_avg(
data: &[CpuWidgetData], separate_avg: bool,
) -> (Vec<CpuWidgetData>, Option<CpuWidgetData>) {
let mut cpu_data = vec![];
let mut avg_data = None;
for cpu in data {
let CpuWidgetData::Entry {
data_type,
data,
last_entry,
} = cpu
else {
unreachable!()
};
match data_type {
CpuDataType::Avg if separate_avg => {
avg_data = Some(CpuWidgetData::Entry {
data_type: *data_type,
data: data.clone(),
last_entry: *last_entry,
});
}
_ => {
cpu_data.push(CpuWidgetData::Entry {
data_type: *data_type,
data: data.clone(),
last_entry: *last_entry,
});
}
}
}
(cpu_data, avg_data)
}
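
The column-filling logic described in the comments above boils down to: for each of the four columns, take the ceiling of (entries remaining / columns remaining), capped by the visible height. A small sketch of just that distribution step; `distribute` is a hypothetical helper name, since the real code does this inline while laying out rows.

```rust
fn distribute(num_entries: usize, columns: usize, height: usize) -> Vec<usize> {
    let mut remaining = num_entries;
    (0..columns)
        .map(|i| {
            let cols_left = columns - i;
            // Same as `(row_counter / to_divide) + usize::from(row_counter % to_divide != 0)`
            // in the diff above, i.e. a ceiling division.
            let take = std::cmp::min(height, remaining.div_ceil(cols_left));
            remaining -= take;
            take
        })
        .collect()
}

fn main() {
    // 10 CPUs across 4 columns with plenty of height: 3 + 3 + 2 + 2.
    assert_eq!(distribute(10, 4, 100), vec![3, 3, 2, 2]);
    // Only 2 rows of height: each column is capped, and the overflow simply doesn't fit.
    assert_eq!(distribute(10, 4, 2), vec![2, 2, 2, 2]);
}
```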

View File

@ -1,22 +1,22 @@
use std::borrow::Cow;
use tui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
symbols::Marker,
terminal::Frame,
};
use crate::{
app::{layout_manager::WidgetDirection, App},
app::{App, data::StoredData, layout_manager::WidgetDirection},
canvas::{
Painter,
components::{
data_table::{DrawInfo, SelectionState},
time_graph::{GraphData, TimeGraph},
time_graph::{AxisBound, GraphData, TimeGraph},
},
drawing_utils::should_hide_x_label,
Painter,
},
data_conversion::CpuWidgetData,
collection::cpu::CpuData,
widgets::CpuWidgetState,
};
@ -120,56 +120,50 @@ impl Painter {
}
fn generate_points<'a>(
&self, cpu_widget_state: &CpuWidgetState, cpu_data: &'a [CpuWidgetData], show_avg_cpu: bool,
&self, cpu_widget_state: &'a mut CpuWidgetState, data: &'a StoredData, show_avg_cpu: bool,
) -> Vec<GraphData<'a>> {
let show_avg_offset = if show_avg_cpu { AVG_POSITION } else { 0 };
let current_scroll_position = cpu_widget_state.table.state.current_index;
let cpu_entries = &data.cpu_harvest;
let cpu_points = &data.timeseries_data.cpu;
let time = &data.timeseries_data.time;
if current_scroll_position == ALL_POSITION {
// This case ensures the other cases cannot have the position be equal to 0.
cpu_data
cpu_points
.iter()
.enumerate()
.rev()
.filter_map(|(itx, cpu)| {
match &cpu {
CpuWidgetData::All => None,
CpuWidgetData::Entry { data, .. } => {
let style = if show_avg_cpu && itx == AVG_POSITION {
self.colours.avg_cpu_colour
} else if itx == ALL_POSITION {
self.colours.all_cpu_colour
} else {
let offset_position = itx - 1; // Because of the all position
self.colours.cpu_colour_styles[(offset_position - show_avg_offset)
% self.colours.cpu_colour_styles.len()]
};
.map(|(itx, values)| {
let style = if show_avg_cpu && itx == AVG_POSITION {
self.styles.avg_cpu_colour
} else if itx == ALL_POSITION {
self.styles.all_cpu_colour
} else {
self.styles.cpu_colour_styles
[(itx - show_avg_offset) % self.styles.cpu_colour_styles.len()]
};
Some(GraphData {
points: &data[..],
style,
name: None,
})
}
}
GraphData::default().style(style).time(time).values(values)
})
.collect::<Vec<_>>()
} else if let Some(CpuWidgetData::Entry { data, .. }) =
cpu_data.get(current_scroll_position)
{
.collect()
} else if let Some(CpuData { .. }) = cpu_entries.get(current_scroll_position - 1) {
// We generally subtract one from current scroll position because of the all entry.
let style = if show_avg_cpu && current_scroll_position == AVG_POSITION {
self.colours.avg_cpu_colour
self.styles.avg_cpu_colour
} else {
let offset_position = current_scroll_position - 1; // Because of the all position
self.colours.cpu_colour_styles
[(offset_position - show_avg_offset) % self.colours.cpu_colour_styles.len()]
let offset_position = current_scroll_position - 1;
self.styles.cpu_colour_styles
[(offset_position - show_avg_offset) % self.styles.cpu_colour_styles.len()]
};
vec![GraphData {
points: &data[..],
style,
name: None,
}]
vec![
GraphData::default()
.style(style)
.time(time)
.values(&cpu_points[current_scroll_position - 1]),
]
} else {
vec![]
}
@ -178,14 +172,15 @@ impl Painter {
fn draw_cpu_graph(
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {
const Y_BOUNDS: [f64; 2] = [0.0, 100.5];
const Y_BOUNDS: AxisBound = AxisBound::Max(100.5);
const Y_LABELS: [Cow<'static, str>; 2] = [Cow::Borrowed(" 0%"), Cow::Borrowed("100%")];
if let Some(cpu_widget_state) = app_state.states.cpu_state.widget_states.get_mut(&widget_id)
{
let cpu_data = &app_state.converted_data.cpu_data;
let data = app_state.data_store.get_data();
let border_style = self.get_border_style(widget_id, app_state.current_widget.widget_id);
let x_bounds = [0, cpu_widget_state.current_display_time];
let x_min = -(cpu_widget_state.current_display_time as f64);
let hide_x_labels = should_hide_x_label(
app_state.app_config_fields.hide_time,
app_state.app_config_fields.autohide_time,
@ -193,9 +188,9 @@ impl Painter {
draw_loc,
);
let points = self.generate_points(
let graph_data = self.generate_points(
cpu_widget_state,
cpu_data,
data,
app_state.app_config_fields.show_average_cpu,
);
@ -203,7 +198,7 @@ impl Painter {
let title = {
#[cfg(target_family = "unix")]
{
let load_avg = app_state.converted_data.load_avg_data;
let load_avg = &data.load_avg_harvest;
let load_avg_str = format!(
"─ {:.2} {:.2} {:.2} ",
load_avg[0], load_avg[1], load_avg[2]
@ -224,20 +219,23 @@ impl Painter {
};
TimeGraph {
x_bounds,
x_min,
hide_x_labels,
y_bounds: Y_BOUNDS,
y_labels: &Y_LABELS,
graph_style: self.colours.graph_style,
graph_style: self.styles.graph_style,
border_style,
border_type: self.styles.border_type,
title,
is_selected: app_state.current_widget.widget_id == widget_id,
is_expanded: app_state.is_expanded,
title_style: self.colours.widget_title_style,
title_style: self.styles.widget_title_style,
legend_position: None,
legend_constraints: None,
marker,
scaling: Default::default(),
}
.draw_time_graph(f, draw_loc, &points);
.draw_time_graph(f, draw_loc, graph_data);
}
}

View File

@ -1,10 +1,10 @@
use tui::{layout::Rect, terminal::Frame};
use tui::{Frame, layout::Rect};
use crate::{
app,
canvas::{
components::data_table::{DrawInfo, SelectionState},
Painter,
components::data_table::{DrawInfo, SelectionState},
},
};

View File

@ -1,176 +1,164 @@
use std::borrow::Cow;
use tui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
terminal::Frame,
widgets::Block,
};
use crate::{
app::App,
canvas::{components::pipe_gauge::PipeGauge, Painter},
constants::*,
canvas::{Painter, components::pipe_gauge::PipeGauge, drawing_utils::widget_block},
collection::memory::MemData,
get_binary_unit_and_denominator,
};
/// Convert memory info into a string representing a fraction.
#[inline]
fn memory_fraction_label(data: &MemData) -> Cow<'static, str> {
let total_bytes = data.total_bytes.get();
let (unit, denominator) = get_binary_unit_and_denominator(total_bytes);
let used = data.used_bytes as f64 / denominator;
let total = total_bytes as f64 / denominator;
format!("{used:.1}{unit}/{total:.1}{unit}").into()
}
/// Convert memory info into a string representing a percentage.
#[inline]
fn memory_percentage_label(data: &MemData) -> Cow<'static, str> {
let total_bytes = data.total_bytes.get();
let percentage = data.used_bytes as f64 / total_bytes as f64 * 100.0;
format!("{percentage:3.0}%").into()
}
#[inline]
fn memory_label(data: &MemData, is_percentage: bool) -> Cow<'static, str> {
if is_percentage {
memory_percentage_label(data)
} else {
memory_fraction_label(data)
}
}
impl Painter {
pub fn draw_basic_memory(
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {
let mem_data = &app_state.converted_data.mem_data;
let mut draw_widgets: Vec<PipeGauge<'_>> = Vec::new();
if app_state.current_widget.widget_id == widget_id {
f.render_widget(
Block::default()
.borders(SIDE_BORDERS)
.border_style(self.colours.highlighted_border_style),
widget_block(true, true, self.styles.border_type)
.border_style(self.styles.highlighted_border_style),
draw_loc,
);
}
let ram_percentage = if let Some(mem) = mem_data.last() {
mem.1
let data = app_state.data_store.get_data();
let (ram_percentage, ram_label) = if let Some(ram_harvest) = &data.ram_harvest {
(
ram_harvest.percentage(),
memory_label(ram_harvest, app_state.basic_mode_use_percent),
)
} else {
0.0
};
const EMPTY_MEMORY_FRAC_STRING: &str = "0.0B/0.0B";
let memory_fraction_label =
if let Some((_, label_frac)) = &app_state.converted_data.mem_labels {
(
0.0,
if app_state.basic_mode_use_percent {
format!("{:3.0}%", ram_percentage.round())
"0.0B/0.0B".into()
} else {
label_frac.trim().to_string()
}
} else {
EMPTY_MEMORY_FRAC_STRING.to_string()
};
" 0%".into()
},
)
};
draw_widgets.push(
PipeGauge::default()
.ratio(ram_percentage / 100.0)
.start_label("RAM")
.inner_label(memory_fraction_label)
.label_style(self.colours.ram_style)
.gauge_style(self.colours.ram_style),
.inner_label(ram_label)
.label_style(self.styles.ram_style)
.gauge_style(self.styles.ram_style),
);
if let Some(swap_harvest) = &data.swap_harvest {
let swap_percentage = swap_harvest.percentage();
let swap_label = memory_label(swap_harvest, app_state.basic_mode_use_percent);
draw_widgets.push(
PipeGauge::default()
.ratio(swap_percentage / 100.0)
.start_label("SWP")
.inner_label(swap_label)
.label_style(self.styles.swap_style)
.gauge_style(self.styles.swap_style),
);
}
#[cfg(not(target_os = "windows"))]
{
if let Some((_, label_frac)) = &app_state.converted_data.cache_labels {
let cache_data = &app_state.converted_data.cache_data;
if let Some(cache_harvest) = &data.cache_harvest {
let cache_percentage = cache_harvest.percentage();
let cache_fraction_label =
memory_label(cache_harvest, app_state.basic_mode_use_percent);
let cache_percentage = if let Some(cache) = cache_data.last() {
cache.1
} else {
0.0
};
let cache_fraction_label = if app_state.basic_mode_use_percent {
format!("{:3.0}%", cache_percentage.round())
} else {
label_frac.trim().to_string()
};
draw_widgets.push(
PipeGauge::default()
.ratio(cache_percentage / 100.0)
.start_label("CHE")
.inner_label(cache_fraction_label)
.label_style(self.colours.cache_style)
.gauge_style(self.colours.cache_style),
.label_style(self.styles.cache_style)
.gauge_style(self.styles.cache_style),
);
}
}
let swap_data = &app_state.converted_data.swap_data;
let swap_percentage = if let Some(swap) = swap_data.last() {
swap.1
} else {
0.0
};
if let Some((_, label_frac)) = &app_state.converted_data.swap_labels {
let swap_fraction_label = if app_state.basic_mode_use_percent {
format!("{:3.0}%", swap_percentage.round())
} else {
label_frac.trim().to_string()
};
draw_widgets.push(
PipeGauge::default()
.ratio(swap_percentage / 100.0)
.start_label("SWP")
.inner_label(swap_fraction_label)
.label_style(self.colours.swap_style)
.gauge_style(self.colours.swap_style),
);
}
#[cfg(feature = "zfs")]
{
let arc_data = &app_state.converted_data.arc_data;
let arc_percentage = if let Some(arc) = arc_data.last() {
arc.1
} else {
0.0
};
if let Some((_, label_frac)) = &app_state.converted_data.arc_labels {
let arc_fraction_label = if app_state.basic_mode_use_percent {
format!("{:3.0}%", arc_percentage.round())
} else {
label_frac.trim().to_string()
};
if let Some(arc_harvest) = &data.arc_harvest {
let arc_percentage = arc_harvest.percentage();
let arc_fraction_label =
memory_label(arc_harvest, app_state.basic_mode_use_percent);
draw_widgets.push(
PipeGauge::default()
.ratio(arc_percentage / 100.0)
.start_label("ARC")
.inner_label(arc_fraction_label)
.label_style(self.colours.arc_style)
.gauge_style(self.colours.arc_style),
.label_style(self.styles.arc_style)
.gauge_style(self.styles.arc_style),
);
}
}
#[cfg(feature = "gpu")]
{
if let Some(gpu_data) = &app_state.converted_data.gpu_data {
let gpu_styles = &self.colours.gpu_colours;
let mut color_index = 0;
let gpu_styles = &self.styles.gpu_colours;
let mut colour_index = 0;
gpu_data.iter().for_each(|gpu_data_vec| {
let gpu_data = gpu_data_vec.points.as_slice();
let gpu_percentage = if let Some(gpu) = gpu_data.last() {
gpu.1
for (_, harvest) in data.gpu_harvest.iter() {
let percentage = harvest.percentage();
let label = memory_label(harvest, app_state.basic_mode_use_percent);
let style = {
if gpu_styles.is_empty() {
tui::style::Style::default()
} else {
0.0
};
let trimmed_gpu_frac = {
if app_state.basic_mode_use_percent {
format!("{:3.0}%", gpu_percentage.round())
} else {
gpu_data_vec.mem_total.trim().to_string()
}
};
let style = {
if gpu_styles.is_empty() {
tui::style::Style::default()
} else if color_index >= gpu_styles.len() {
// cycle styles
color_index = 1;
gpu_styles[color_index - 1]
} else {
color_index += 1;
gpu_styles[color_index - 1]
}
};
draw_widgets.push(
PipeGauge::default()
.ratio(gpu_percentage / 100.0)
.start_label("GPU")
.inner_label(trimmed_gpu_frac)
.label_style(style)
.gauge_style(style),
);
});
let colour = gpu_styles[colour_index % gpu_styles.len()];
colour_index += 1;
colour
}
};
draw_widgets.push(
PipeGauge::default()
.ratio(percentage / 100.0)
.start_label("GPU")
.inner_label(label)
.label_style(style)
.gauge_style(style),
);
}
}
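
The `memory_fraction_label` / `memory_percentage_label` helpers at the top of this file drive every gauge label here. A worked sketch with concrete numbers, using a simplified `MemData` and a stand-in unit helper (bottom's real `total_bytes` is a `NonZeroU64`, and `get_binary_unit_and_denominator` covers the full binary-prefix range):

```rust
struct MemData {
    used_bytes: u64,
    total_bytes: u64, // bottom uses NonZeroU64 here.
}

/// Simplified stand-in for `get_binary_unit_and_denominator`.
fn binary_unit_and_denominator(bytes: u64) -> (&'static str, f64) {
    const MIB: u64 = 1024 * 1024;
    const GIB: u64 = 1024 * MIB;
    if bytes >= GIB { ("GiB", GIB as f64) } else { ("MiB", MIB as f64) }
}

fn memory_fraction_label(data: &MemData) -> String {
    let (unit, denominator) = binary_unit_and_denominator(data.total_bytes);
    let used = data.used_bytes as f64 / denominator;
    let total = data.total_bytes as f64 / denominator;
    format!("{used:.1}{unit}/{total:.1}{unit}")
}

fn memory_percentage_label(data: &MemData) -> String {
    let percentage = data.used_bytes as f64 / data.total_bytes as f64 * 100.0;
    format!("{percentage:3.0}%")
}

fn main() {
    let mem = MemData {
        used_bytes: 4 * 1024 * 1024 * 1024,
        total_bytes: 16 * 1024 * 1024 * 1024,
    };
    assert_eq!(memory_fraction_label(&mem), "4.0GiB/16.0GiB");
    assert_eq!(memory_percentage_label(&mem), " 25%"); // padded to width 3 before the '%'.
}
```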

View File

@ -1,116 +1,169 @@
use std::borrow::Cow;
use std::{borrow::Cow, time::Instant};
use tui::{
Frame,
layout::{Constraint, Rect},
style::Style,
symbols::Marker,
terminal::Frame,
};
use crate::{
app::App,
app::{App, data::Values},
canvas::{
components::time_graph::{GraphData, TimeGraph},
drawing_utils::should_hide_x_label,
Painter,
components::time_graph::{AxisBound, GraphData, TimeGraph},
drawing_utils::should_hide_x_label,
},
collection::memory::MemData,
get_binary_unit_and_denominator,
};
/// Convert memory info into a combined memory label.
#[inline]
fn memory_legend_label(name: &str, data: Option<&MemData>) -> String {
if let Some(data) = data {
let total_bytes = data.total_bytes.get();
let percentage = data.used_bytes as f64 / total_bytes as f64 * 100.0;
let (unit, denominator) = get_binary_unit_and_denominator(total_bytes);
let used = data.used_bytes as f64 / denominator;
let total = total_bytes as f64 / denominator;
format!("{name}:{percentage:3.0}% {used:.1}{unit}/{total:.1}{unit}")
} else {
format!("{name}: 0% 0.0B/0.0B")
}
}
/// Get graph data.
#[inline]
fn graph_data<'a>(
out: &mut Vec<GraphData<'a>>, name: &str, last_harvest: Option<&'a MemData>,
time: &'a [Instant], values: &'a Values, style: Style,
) {
if !values.no_elements() {
let label = memory_legend_label(name, last_harvest).into();
out.push(
GraphData::default()
.name(label)
.time(time)
.values(values)
.style(style),
);
}
}
impl Painter {
pub fn draw_memory_graph(
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {
const Y_BOUNDS: [f64; 2] = [0.0, 100.5];
const Y_BOUNDS: AxisBound = AxisBound::Max(100.5);
const Y_LABELS: [Cow<'static, str>; 2] = [Cow::Borrowed(" 0%"), Cow::Borrowed("100%")];
if let Some(mem_widget_state) = app_state.states.mem_state.widget_states.get_mut(&widget_id)
{
if let Some(mem_state) = app_state.states.mem_state.widget_states.get_mut(&widget_id) {
let border_style = self.get_border_style(widget_id, app_state.current_widget.widget_id);
let x_bounds = [0, mem_widget_state.current_display_time];
let x_min = -(mem_state.current_display_time as f64);
let hide_x_labels = should_hide_x_label(
app_state.app_config_fields.hide_time,
app_state.app_config_fields.autohide_time,
&mut mem_widget_state.autohide_timer,
&mut mem_state.autohide_timer,
draw_loc,
);
let points = {
let graph_data = {
let mut size = 1;
if app_state.converted_data.swap_labels.is_some() {
let data = app_state.data_store.get_data();
// TODO: is this optimization really needed...? This just pre-allocates a vec, but it'll probably never
// be that big...
if data.swap_harvest.is_some() {
size += 1; // add capacity for SWAP
}
#[cfg(feature = "zfs")]
{
if app_state.converted_data.arc_labels.is_some() {
if data.arc_harvest.is_some() {
size += 1; // add capacity for ARC
}
}
#[cfg(feature = "gpu")]
{
if let Some(gpu_data) = &app_state.converted_data.gpu_data {
size += gpu_data.len(); // add row(s) for gpu
}
size += data.gpu_harvest.len(); // add row(s) for gpu
}
let mut points = Vec::with_capacity(size);
if let Some((label_percent, label_frac)) = &app_state.converted_data.mem_labels {
let mem_label = format!("RAM:{label_percent}{label_frac}");
points.push(GraphData {
points: &app_state.converted_data.mem_data,
style: self.colours.ram_style,
name: Some(mem_label.into()),
});
}
let timeseries = &data.timeseries_data;
let time = &timeseries.time;
// TODO: Add a "no data" option here/to time graph if there is no entries
graph_data(
&mut points,
"RAM",
data.ram_harvest.as_ref(),
time,
&timeseries.ram,
self.styles.ram_style,
);
graph_data(
&mut points,
"SWP",
data.swap_harvest.as_ref(),
time,
&timeseries.swap,
self.styles.swap_style,
);
#[cfg(not(target_os = "windows"))]
if let Some((label_percent, label_frac)) = &app_state.converted_data.cache_labels {
let cache_label = format!("CHE:{label_percent}{label_frac}");
points.push(GraphData {
points: &app_state.converted_data.cache_data,
style: self.colours.cache_style,
name: Some(cache_label.into()),
});
}
if let Some((label_percent, label_frac)) = &app_state.converted_data.swap_labels {
let swap_label = format!("SWP:{label_percent}{label_frac}");
points.push(GraphData {
points: &app_state.converted_data.swap_data,
style: self.colours.swap_style,
name: Some(swap_label.into()),
});
{
graph_data(
&mut points,
"CACHE", // TODO: Figure out how to line this up better
data.cache_harvest.as_ref(),
time,
&timeseries.cache_mem,
self.styles.cache_style,
);
}
#[cfg(feature = "zfs")]
if let Some((label_percent, label_frac)) = &app_state.converted_data.arc_labels {
let arc_label = format!("ARC:{label_percent}{label_frac}");
points.push(GraphData {
points: &app_state.converted_data.arc_data,
style: self.colours.arc_style,
name: Some(arc_label.into()),
});
{
graph_data(
&mut points,
"ARC",
data.arc_harvest.as_ref(),
time,
&timeseries.arc_mem,
self.styles.arc_style,
);
}
#[cfg(feature = "gpu")]
{
if let Some(gpu_data) = &app_state.converted_data.gpu_data {
let mut color_index = 0;
let gpu_styles = &self.colours.gpu_colours;
gpu_data.iter().for_each(|gpu| {
let gpu_label =
format!("{}:{}{}", gpu.name, gpu.mem_percent, gpu.mem_total);
let mut colour_index = 0;
let gpu_styles = &self.styles.gpu_colours;
for (name, harvest) in &data.gpu_harvest {
if let Some(gpu_data) = data.timeseries_data.gpu_mem.get(name) {
let style = {
if gpu_styles.is_empty() {
tui::style::Style::default()
} else if color_index >= gpu_styles.len() {
// cycle styles
color_index = 1;
gpu_styles[color_index - 1]
Style::default()
} else {
color_index += 1;
gpu_styles[color_index - 1]
let colour = gpu_styles[colour_index % gpu_styles.len()];
colour_index += 1;
colour
}
};
points.push(GraphData {
points: gpu.points.as_slice(),
graph_data(
&mut points,
name, // TODO: REALLY figure out how to line this up better
Some(harvest),
time,
gpu_data,
style,
name: Some(gpu_label.into()),
});
});
);
}
}
}
@ -124,20 +177,23 @@ impl Painter {
};
TimeGraph {
x_bounds,
x_min,
hide_x_labels,
y_bounds: Y_BOUNDS,
y_labels: &Y_LABELS,
graph_style: self.colours.graph_style,
graph_style: self.styles.graph_style,
border_style,
border_type: self.styles.border_type,
title: " Memory ".into(),
is_selected: app_state.current_widget.widget_id == widget_id,
is_expanded: app_state.is_expanded,
title_style: self.colours.widget_title_style,
title_style: self.styles.widget_title_style,
legend_position: app_state.app_config_fields.memory_legend_position,
legend_constraints: Some((Constraint::Ratio(3, 4), Constraint::Ratio(3, 4))),
marker,
scaling: Default::default(),
}
.draw_time_graph(f, draw_loc, &points);
.draw_time_graph(f, draw_loc, graph_data);
}
if app_state.should_get_widget_bounds() {

View File

@ -1,11 +1,15 @@
use tui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
terminal::Frame,
text::{Line, Span},
widgets::{Block, Paragraph},
};
use crate::{app::App, canvas::Painter, constants::*};
use crate::{
app::App,
canvas::{Painter, drawing_utils::widget_block},
utils::data_units::{convert_bits, get_unit_prefix},
};
impl Painter {
pub fn draw_basic_network(
@ -30,26 +34,32 @@ impl Painter {
if app_state.current_widget.widget_id == widget_id {
f.render_widget(
Block::default()
.borders(SIDE_BORDERS)
.border_style(self.colours.highlighted_border_style),
widget_block(true, true, self.styles.border_type)
.border_style(self.styles.highlighted_border_style),
draw_loc,
);
}
let rx_label = format!("RX: {}", app_state.converted_data.rx_display);
let tx_label = format!("TX: {}", app_state.converted_data.tx_display);
let total_rx_label = format!("Total RX: {}", app_state.converted_data.total_rx_display);
let total_tx_label = format!("Total TX: {}", app_state.converted_data.total_tx_display);
let use_binary_prefix = app_state.app_config_fields.network_use_binary_prefix;
let network_data = &(app_state.data_store.get_data().network_harvest);
let rx = get_unit_prefix(network_data.rx, use_binary_prefix);
let tx = get_unit_prefix(network_data.tx, use_binary_prefix);
let total_rx = convert_bits(network_data.total_rx, use_binary_prefix);
let total_tx = convert_bits(network_data.total_tx, use_binary_prefix);
let rx_label = format!("RX: {:.1}{}", rx.0, rx.1);
let tx_label = format!("TX: {:.1}{}", tx.0, tx.1);
let total_rx_label = format!("Total RX: {:.1}{}", total_rx.0, total_rx.1);
let total_tx_label = format!("Total TX: {:.1}{}", total_tx.0, total_tx.1);
let net_text = vec![
Line::from(Span::styled(rx_label, self.colours.rx_style)),
Line::from(Span::styled(tx_label, self.colours.tx_style)),
Line::from(Span::styled(rx_label, self.styles.rx_style)),
Line::from(Span::styled(tx_label, self.styles.tx_style)),
];
let total_net_text = vec![
Line::from(Span::styled(total_rx_label, self.colours.total_rx_style)),
Line::from(Span::styled(total_tx_label, self.colours.total_tx_style)),
Line::from(Span::styled(total_rx_label, self.styles.total_rx_style)),
Line::from(Span::styled(total_tx_label, self.styles.total_tx_style)),
];
f.render_widget(Paragraph::new(net_text).block(Block::default()), net_loc[0]);
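
The basic network widget now builds its labels directly from the latest harvest: rates go through `get_unit_prefix` and totals through `convert_bits`, each returning a `(value, unit)` pair that is formatted as `"RX: {:.1}{}"`. A hedged sketch of that label shape; `simple_prefix` below is only a stand-in for bottom's helpers, which also handle binary prefixes and the bit/byte distinction.

```rust
/// Illustrative stand-in for `get_unit_prefix`: pick a decimal prefix and unit.
fn simple_prefix(bytes_per_sec: f64) -> (f64, &'static str) {
    if bytes_per_sec >= 1_000_000.0 {
        (bytes_per_sec / 1_000_000.0, "MB/s")
    } else if bytes_per_sec >= 1_000.0 {
        (bytes_per_sec / 1_000.0, "KB/s")
    } else {
        (bytes_per_sec, "B/s")
    }
}

fn main() {
    let (rx, unit) = simple_prefix(1_530_000.0);
    assert_eq!(format!("RX: {rx:.1}{unit}"), "RX: 1.5MB/s");
}
```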

View File

@ -1,22 +1,25 @@
use std::time::Duration;
use tui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
symbols::Marker,
terminal::Frame,
text::Text,
widgets::{Block, Borders, Row, Table},
};
use crate::{
app::{App, AxisScaling},
app::{App, AppConfigFields, AxisScaling},
canvas::{
components::{
time_chart::Point,
time_graph::{GraphData, TimeGraph},
},
drawing_utils::should_hide_x_label,
Painter,
components::time_graph::{AxisBound, ChartScaling, GraphData, TimeGraph},
drawing_utils::should_hide_x_label,
},
utils::{data_prefixes::*, data_units::DataUnit, general::partial_ordering},
utils::{
data_units::*,
general::{saturating_log2, saturating_log10},
},
widgets::NetWidgetHeightCache,
};
impl Painter {
@ -54,16 +57,19 @@ impl Painter {
pub fn draw_network_graph(
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
hide_legend: bool,
full_screen: bool,
) {
if let Some(network_widget_state) =
app_state.states.net_state.widget_states.get_mut(&widget_id)
{
let network_data_rx = &app_state.converted_data.network_data_rx;
let network_data_tx = &app_state.converted_data.network_data_tx;
let shared_data = app_state.data_store.get_data();
let network_latest_data = &(shared_data.network_harvest);
let rx_points = &(shared_data.timeseries_data.rx);
let tx_points = &(shared_data.timeseries_data.tx);
let time = &(shared_data.timeseries_data.time);
let time_start = -(network_widget_state.current_display_time as f64);
let border_style = self.get_border_style(widget_id, app_state.current_widget.widget_id);
let x_bounds = [0, network_widget_state.current_display_time];
let hide_x_labels = should_hide_x_label(
app_state.app_config_fields.hide_time,
app_state.app_config_fields.autohide_time,
@ -71,79 +77,131 @@ impl Painter {
draw_loc,
);
// TODO: Cache network results: Only update if:
// - Force update (includes time interval change)
// - Old max time is off screen
// - A new time interval is better and does not fit (check from end of vector to
// last checked; we only want to update if it is TOO big!)
let y_max = {
if let Some(last_time) = time.last() {
// For now, just do it each time. Might want to cache this later though.
// Find the maximal rx/tx so we know how to scale, and return it.
let (_best_time, max_entry) = get_max_entry(
network_data_rx,
network_data_tx,
time_start,
&app_state.app_config_fields.network_scale_type,
app_state.app_config_fields.network_use_binary_prefix,
);
let (mut biggest, mut biggest_time, first_time) = {
let initial_first_time = *last_time
- Duration::from_millis(network_widget_state.current_display_time);
let (max_range, labels) = adjust_network_data_point(
max_entry,
&app_state.app_config_fields.network_scale_type,
&app_state.app_config_fields.network_unit_type,
app_state.app_config_fields.network_use_binary_prefix,
);
match &network_widget_state.height_cache {
Some(NetWidgetHeightCache {
best_point,
right_edge,
period,
}) => {
if *period != network_widget_state.current_display_time
|| best_point.0 < initial_first_time
{
(0.0, initial_first_time, initial_first_time)
} else {
(best_point.1, best_point.0, *right_edge)
}
}
None => (0.0, initial_first_time, initial_first_time),
}
};
let y_labels = labels.iter().map(|label| label.into()).collect::<Vec<_>>();
let y_bounds = [0.0, max_range];
for (&time, &v) in rx_points
.iter_along_base(time)
.rev()
.take_while(|&(&time, _)| time >= first_time)
{
if v > biggest {
biggest = v;
biggest_time = time;
}
}
let legend_constraints = if hide_legend {
for (&time, &v) in tx_points
.iter_along_base(time)
.rev()
.take_while(|&(&time, _)| time >= first_time)
{
if v > biggest {
biggest = v;
biggest_time = time;
}
}
network_widget_state.height_cache = Some(NetWidgetHeightCache {
best_point: (biggest_time, biggest),
right_edge: *last_time,
period: network_widget_state.current_display_time,
});
biggest
} else {
0.0
}
};
let (y_max, y_labels) = adjust_network_data_point(y_max, &app_state.app_config_fields);
let y_bounds = AxisBound::Max(y_max);
let legend_constraints = if full_screen {
(Constraint::Ratio(0, 1), Constraint::Ratio(0, 1))
} else {
(Constraint::Ratio(1, 1), Constraint::Ratio(3, 4))
};
// TODO: Add support for clicking on legend to only show that value on chart.
let points = if app_state.app_config_fields.use_old_network_legend && !hide_legend {
let use_binary_prefix = app_state.app_config_fields.network_use_binary_prefix;
let unit_type = app_state.app_config_fields.network_unit_type;
let unit = match unit_type {
DataUnit::Byte => "B/s",
DataUnit::Bit => "b/s",
};
let rx = get_unit_prefix(network_latest_data.rx, use_binary_prefix);
let tx = get_unit_prefix(network_latest_data.tx, use_binary_prefix);
let total_rx = convert_bits(network_latest_data.total_rx, use_binary_prefix);
let total_tx = convert_bits(network_latest_data.total_tx, use_binary_prefix);
// TODO: This behaviour is pretty weird, we should probably just make it so if you use old network legend
// you don't do whatever this is...
let graph_data = if app_state.app_config_fields.use_old_network_legend && !full_screen {
let rx_label = format!("RX: {:.1}{}{}", rx.0, rx.1, unit);
let tx_label = format!("TX: {:.1}{}{}", tx.0, tx.1, unit);
let total_rx_label = format!("Total RX: {:.1}{}", total_rx.0, total_rx.1);
let total_tx_label = format!("Total TX: {:.1}{}", total_tx.0, total_tx.1);
vec![
GraphData {
points: network_data_rx,
style: self.colours.rx_style,
name: Some(format!("RX: {:7}", app_state.converted_data.rx_display).into()),
},
GraphData {
points: network_data_tx,
style: self.colours.tx_style,
name: Some(format!("TX: {:7}", app_state.converted_data.tx_display).into()),
},
GraphData {
points: &[],
style: self.colours.total_rx_style,
name: Some(
format!("Total RX: {:7}", app_state.converted_data.total_rx_display)
.into(),
),
},
GraphData {
points: &[],
style: self.colours.total_tx_style,
name: Some(
format!("Total TX: {:7}", app_state.converted_data.total_tx_display)
.into(),
),
},
GraphData::default()
.name(rx_label.into())
.time(time)
.values(rx_points)
.style(self.styles.rx_style),
GraphData::default()
.name(tx_label.into())
.time(time)
.values(tx_points)
.style(self.styles.tx_style),
GraphData::default()
.style(self.styles.total_rx_style)
.name(total_rx_label.into()),
GraphData::default()
.style(self.styles.total_tx_style)
.name(total_tx_label.into()),
]
} else {
let rx_label = format!("{:.1}{}{}", rx.0, rx.1, unit);
let tx_label = format!("{:.1}{}{}", tx.0, tx.1, unit);
let total_rx_label = format!("{:.1}{}", total_rx.0, total_rx.1);
let total_tx_label = format!("{:.1}{}", total_tx.0, total_tx.1);
vec![
GraphData {
points: network_data_rx,
style: self.colours.rx_style,
name: Some((&app_state.converted_data.rx_display).into()),
},
GraphData {
points: network_data_tx,
style: self.colours.tx_style,
name: Some((&app_state.converted_data.tx_display).into()),
},
GraphData::default()
.name(format!("RX: {:<10} All: {}", rx_label, total_rx_label).into())
.time(time)
.values(rx_points)
.style(self.styles.rx_style),
GraphData::default()
.name(format!("TX: {:<10} All: {}", tx_label, total_tx_label).into())
.time(time)
.values(tx_points)
.style(self.styles.tx_style),
]
};
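
The `y_max` computation above scans the rx and tx series newest-to-oldest, stops once it leaves the visible window, and remembers both the peak and its timestamp so the result can be cached until that peak scrolls off screen. A minimal sketch of that walk over plain slices; `window_peak` and the flat `f64` series are illustrative, whereas the real code iterates `Values` via `iter_along_base` and keeps the result in `NetWidgetHeightCache`.

```rust
/// Walk a time-aligned series from newest to oldest, stopping at the window edge,
/// and return the peak value together with the time it occurred.
fn window_peak(time: &[f64], values: &[f64], first_time: f64) -> (f64, f64) {
    let mut peak = 0.0_f64;
    let mut peak_time = first_time;
    for (&t, &v) in time.iter().zip(values).rev().take_while(|&(&t, _)| t >= first_time) {
        if v > peak {
            peak = v;
            peak_time = t;
        }
    }
    (peak_time, peak)
}

fn main() {
    let time = [-4.0, -3.0, -2.0, -1.0, 0.0];
    let rx = [9.0, 1.0, 5.0, 2.0, 3.0];
    // Only the last 3 seconds are visible, so the 9.0 at t = -4 is ignored.
    assert_eq!(window_peak(&time, &rx, -3.0), (-2.0, 5.0));
}
```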
@ -153,21 +211,36 @@ impl Painter {
Marker::Braille
};
let scaling = match app_state.app_config_fields.network_scale_type {
AxisScaling::Log => {
// TODO: I might change this behaviour later.
if app_state.app_config_fields.network_use_binary_prefix {
ChartScaling::Log2
} else {
ChartScaling::Log10
}
}
AxisScaling::Linear => ChartScaling::Linear,
};
TimeGraph {
x_bounds,
x_min: time_start,
hide_x_labels,
y_bounds,
y_labels: &y_labels,
graph_style: self.colours.graph_style,
y_labels: &(y_labels.into_iter().map(Into::into).collect::<Vec<_>>()),
graph_style: self.styles.graph_style,
border_style,
border_type: self.styles.border_type,
title: " Network ".into(),
is_selected: app_state.current_widget.widget_id == widget_id,
is_expanded: app_state.is_expanded,
title_style: self.colours.widget_title_style,
title_style: self.styles.widget_title_style,
legend_position: app_state.app_config_fields.network_legend_position,
legend_constraints: Some(legend_constraints),
marker,
scaling,
}
.draw_time_graph(f, draw_loc, &points);
.draw_time_graph(f, draw_loc, graph_data);
}
}
@ -176,17 +249,31 @@ impl Painter {
) {
const NETWORK_HEADERS: [&str; 4] = ["RX", "TX", "Total RX", "Total TX"];
let rx_display = &app_state.converted_data.rx_display;
let tx_display = &app_state.converted_data.tx_display;
let total_rx_display = &app_state.converted_data.total_rx_display;
let total_tx_display = &app_state.converted_data.total_tx_display;
let network_latest_data = &(app_state.data_store.get_data().network_harvest);
let use_binary_prefix = app_state.app_config_fields.network_use_binary_prefix;
let unit_type = app_state.app_config_fields.network_unit_type;
let unit = match unit_type {
DataUnit::Byte => "B/s",
DataUnit::Bit => "b/s",
};
let rx = get_unit_prefix(network_latest_data.rx, use_binary_prefix);
let tx = get_unit_prefix(network_latest_data.tx, use_binary_prefix);
let rx_label = format!("{:.1}{}{}", rx.0, rx.1, unit);
let tx_label = format!("{:.1}{}{}", tx.0, tx.1, unit);
let total_rx = convert_bits(network_latest_data.total_rx, use_binary_prefix);
let total_tx = convert_bits(network_latest_data.total_tx, use_binary_prefix);
let total_rx_label = format!("{:.1}{}", total_rx.0, total_rx.1);
let total_tx_label = format!("{:.1}{}", total_tx.0, total_tx.1);
// Gross but I need it to work...
let total_network = vec![Row::new([
Text::styled(rx_display, self.colours.rx_style),
Text::styled(tx_display, self.colours.tx_style),
Text::styled(total_rx_display, self.colours.total_rx_style),
Text::styled(total_tx_display, self.colours.total_tx_style),
Text::styled(rx_label, self.styles.rx_style),
Text::styled(tx_label, self.styles.tx_style),
Text::styled(total_rx_label, self.styles.total_rx_style),
Text::styled(total_tx_label, self.styles.total_tx_style),
])];
// Draw
@ -198,147 +285,25 @@ impl Painter {
.map(Constraint::Length)
.collect::<Vec<_>>()),
)
.header(Row::new(NETWORK_HEADERS).style(self.colours.table_header_style))
.header(Row::new(NETWORK_HEADERS).style(self.styles.table_header_style))
.block(Block::default().borders(Borders::ALL).border_style(
if app_state.current_widget.widget_id == widget_id {
self.colours.highlighted_border_style
self.styles.highlighted_border_style
} else {
self.colours.border_style
self.styles.border_style
},
))
.style(self.colours.text_style),
.style(self.styles.text_style),
draw_loc,
);
}
}
/// Returns the max data point and time given a time.
fn get_max_entry(
rx: &[Point], tx: &[Point], time_start: f64, network_scale_type: &AxisScaling,
network_use_binary_prefix: bool,
) -> Point {
/// Determines a "fake" max value in circumstances where we couldn't find
/// one from the data.
fn calculate_missing_max(
network_scale_type: &AxisScaling, network_use_binary_prefix: bool,
) -> f64 {
match network_scale_type {
AxisScaling::Log => {
if network_use_binary_prefix {
LOG_KIBI_LIMIT
} else {
LOG_KILO_LIMIT
}
}
AxisScaling::Linear => {
if network_use_binary_prefix {
KIBI_LIMIT_F64
} else {
KILO_LIMIT_F64
}
}
}
}
// First, narrow down the ranges we actually need to look at. We can exploit the fact
// that our rx and tx arrays are sorted, so we can short-circuit the search and keep
// only the relevant data points...
let filtered_rx = if let (Some(rx_start), Some(rx_end)) = (
rx.iter().position(|(time, _data)| *time >= time_start),
rx.iter().rposition(|(time, _data)| *time <= 0.0),
) {
Some(&rx[rx_start..=rx_end])
} else {
None
};
let filtered_tx = if let (Some(tx_start), Some(tx_end)) = (
tx.iter().position(|(time, _data)| *time >= time_start),
tx.iter().rposition(|(time, _data)| *time <= 0.0),
) {
Some(&tx[tx_start..=tx_end])
} else {
None
};
// Then, find the maximal rx/tx so we know how to scale, and return it.
match (filtered_rx, filtered_tx) {
(None, None) => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
(None, Some(filtered_tx)) => {
match filtered_tx
.iter()
.max_by(|(_, data_a), (_, data_b)| partial_ordering(data_a, data_b))
{
Some((best_time, max_val)) => {
if *max_val == 0.0 {
(
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
)
} else {
(*best_time, *max_val)
}
}
None => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
}
}
(Some(filtered_rx), None) => {
match filtered_rx
.iter()
.max_by(|(_, data_a), (_, data_b)| partial_ordering(data_a, data_b))
{
Some((best_time, max_val)) => {
if *max_val == 0.0 {
(
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
)
} else {
(*best_time, *max_val)
}
}
None => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
}
}
(Some(filtered_rx), Some(filtered_tx)) => {
match filtered_rx
.iter()
.chain(filtered_tx)
.max_by(|(_, data_a), (_, data_b)| partial_ordering(data_a, data_b))
{
Some((best_time, max_val)) => {
if *max_val == 0.0 {
(
*best_time,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
)
} else {
(*best_time, *max_val)
}
}
None => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
}
}
}
}
/// Returns the required max data point and labels.
fn adjust_network_data_point(
max_entry: f64, network_scale_type: &AxisScaling, network_unit_type: &DataUnit,
network_use_binary_prefix: bool,
) -> (f64, Vec<String>) {
/// Returns the required labels.
///
/// TODO: This is _really_ ugly... also there might be a bug with certain heights and too many labels.
/// We may need to take draw height into account, either here, or in the time graph itself.
fn adjust_network_data_point(max_entry: f64, config: &AppConfigFields) -> (f64, Vec<String>) {
// So, we're going with an approach like this for linear data:
// - Main goal is to maximize the amount of information displayed given a
// specific height. We don't want to drown out some data if the ranges are too
@ -351,9 +316,9 @@ fn adjust_network_data_point(
// drew 4 segments, it would be 97.5, 195, 292.5, 390, and
// probably something like 438.75?
//
// So, how do we do this in ratatui? Well, if we are using intervals that tie
// So, how do we do this in ratatui? Well, if we are using intervals that tie
// in perfectly to the max value we want... then it's actually not that
// hard. Since ratatui accepts a vector as labels and will properly space
// hard. Since ratatui accepts a vector as labels and will properly space
// them all out... we just work with that and space it out properly.
//
// Dynamic chart idea based off of FreeNAS's chart design.
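// As a concrete sketch of the linear case implemented below (assuming SI limits of
// 1e3/1e6/1e9/1e12 and a byte unit):
//   max_entry       = 3_000_000.0   -> largest visible data point, 3 MB/s
//   max_entry_upper = 4_500_000.0   -> 1.5x head-room, used to pick the unit prefix
//   4_500_000.0 < g_limit, so we divide by m_limit: max_value_scaled = 3.0, prefix "M"
//   labels then run from "0MB" up to "4.5" (i.e. 1.5x the max), each right-aligned to
//   width 5, and the returned max of 4_500_000.0 lets ratatui space them evenly.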
@ -366,14 +331,18 @@ fn adjust_network_data_point(
// Now just check the largest unit we correspond to... then proceed to build
// some entries from there!
let scale_type = config.network_scale_type;
let use_binary_prefix = config.network_use_binary_prefix;
let network_unit_type = config.network_unit_type;
let unit_char = match network_unit_type {
DataUnit::Byte => "B",
DataUnit::Bit => "b",
};
match network_scale_type {
match scale_type {
AxisScaling::Linear => {
let (k_limit, m_limit, g_limit, t_limit) = if network_use_binary_prefix {
let (k_limit, m_limit, g_limit, t_limit) = if use_binary_prefix {
(
KIBI_LIMIT_F64,
MEBI_LIMIT_F64,
@ -389,32 +358,32 @@ fn adjust_network_data_point(
)
};
let bumped_max_entry = max_entry * 1.5; // We use the bumped up version to calculate our unit type.
let max_entry_upper = max_entry * 1.5; // We use the bumped up version to calculate our unit type.
let (max_value_scaled, unit_prefix, unit_type): (f64, &str, &str) =
if bumped_max_entry < k_limit {
if max_entry_upper < k_limit {
(max_entry, "", unit_char)
} else if bumped_max_entry < m_limit {
} else if max_entry_upper < m_limit {
(
max_entry / k_limit,
if network_use_binary_prefix { "Ki" } else { "K" },
if use_binary_prefix { "Ki" } else { "K" },
unit_char,
)
} else if bumped_max_entry < g_limit {
} else if max_entry_upper < g_limit {
(
max_entry / m_limit,
if network_use_binary_prefix { "Mi" } else { "M" },
if use_binary_prefix { "Mi" } else { "M" },
unit_char,
)
} else if bumped_max_entry < t_limit {
} else if max_entry_upper < t_limit {
(
max_entry / g_limit,
if network_use_binary_prefix { "Gi" } else { "G" },
if use_binary_prefix { "Gi" } else { "G" },
unit_char,
)
} else {
(
max_entry / t_limit,
if network_use_binary_prefix { "Ti" } else { "T" },
if use_binary_prefix { "Ti" } else { "T" },
unit_char,
)
};
@ -422,7 +391,6 @@ fn adjust_network_data_point(
// Finally, build an acceptable range starting from there, using the given
// height! Note we try to put more weight on the bottom section
// vs. the top, since the top has less data.
let base_unit = max_value_scaled;
let labels: Vec<String> = vec![
format!("0{unit_prefix}{unit_type}"),
@ -431,19 +399,29 @@ fn adjust_network_data_point(
format!("{:.1}", base_unit * 1.5),
]
.into_iter()
.map(|s| format!("{s:>5}")) // Pull 5 as the longest legend value is generally going to be 5 digits (if they somehow
// hit over 5 terabits per second)
.map(|s| {
// Pull 5 as the longest legend value is generally going to be 5 digits (if they somehow hit over 5 terabits per second)
format!("{s:>5}")
})
.collect();
(bumped_max_entry, labels)
(max_entry_upper, labels)
}
AxisScaling::Log => {
let (m_limit, g_limit, t_limit) = if network_use_binary_prefix {
let (m_limit, g_limit, t_limit) = if use_binary_prefix {
(LOG_MEBI_LIMIT, LOG_GIBI_LIMIT, LOG_TEBI_LIMIT)
} else {
(LOG_MEGA_LIMIT, LOG_GIGA_LIMIT, LOG_TERA_LIMIT)
};
// Remember to do saturating log checks, as otherwise log(0.0) becomes -inf and you
// get gaps!
let max_entry = if use_binary_prefix {
saturating_log2(max_entry)
} else {
saturating_log10(max_entry)
};
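// A minimal sketch of what the saturating helpers are assumed to look like (names
// taken from the call sites above; the real implementations may differ):
//   fn saturating_log2(value: f64) -> f64 {
//       if value > 0.0 { value.log2() } else { 0.0 }
//   }
//   fn saturating_log10(value: f64) -> f64 {
//       if value > 0.0 { value.log10() } else { 0.0 }
//   }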
fn get_zero(network_use_binary_prefix: bool, unit_char: &str) -> String {
format!(
"{}0{}",
@ -496,47 +474,47 @@ fn adjust_network_data_point(
(
m_limit,
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
get_zero(use_binary_prefix, unit_char),
get_k(use_binary_prefix, unit_char),
get_m(use_binary_prefix, unit_char),
],
)
} else if max_entry < g_limit {
(
g_limit,
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
get_g(network_use_binary_prefix, unit_char),
get_zero(use_binary_prefix, unit_char),
get_k(use_binary_prefix, unit_char),
get_m(use_binary_prefix, unit_char),
get_g(use_binary_prefix, unit_char),
],
)
} else if max_entry < t_limit {
(
t_limit,
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
get_g(network_use_binary_prefix, unit_char),
get_t(network_use_binary_prefix, unit_char),
get_zero(use_binary_prefix, unit_char),
get_k(use_binary_prefix, unit_char),
get_m(use_binary_prefix, unit_char),
get_g(use_binary_prefix, unit_char),
get_t(use_binary_prefix, unit_char),
],
)
} else {
// I really doubt anyone's transferring beyond petabyte speeds...
(
if network_use_binary_prefix {
if use_binary_prefix {
LOG_PEBI_LIMIT
} else {
LOG_PETA_LIMIT
},
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
get_g(network_use_binary_prefix, unit_char),
get_t(network_use_binary_prefix, unit_char),
get_p(network_use_binary_prefix, unit_char),
get_zero(use_binary_prefix, unit_char),
get_k(use_binary_prefix, unit_char),
get_m(use_binary_prefix, unit_char),
get_g(use_binary_prefix, unit_char),
get_t(use_binary_prefix, unit_char),
get_p(use_binary_prefix, unit_char),
],
)
}


@ -1,19 +1,19 @@
use tui::{
Frame,
layout::{Alignment, Constraint, Direction, Layout, Rect},
style::Style,
terminal::Frame,
text::{Line, Span},
widgets::{Block, Borders, Paragraph},
widgets::Paragraph,
};
use unicode_segmentation::UnicodeSegmentation;
use crate::{
app::{App, AppSearchState},
canvas::{
components::data_table::{DrawInfo, SelectionState},
Painter,
components::data_table::{DrawInfo, SelectionState},
drawing_utils::widget_block,
},
constants::*,
};
const SORT_MENU_WIDTH: u16 = 7;
@ -23,11 +23,11 @@ impl Painter {
/// - `widget_id` here represents the widget ID of the process widget
/// itself!
pub fn draw_process(
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, draw_border: bool,
widget_id: u64,
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {
if let Some(proc_widget_state) = app_state.states.proc_state.widget_states.get(&widget_id) {
let search_height = if draw_border { 5 } else { 3 };
let is_basic = app_state.app_config_fields.use_basic_mode;
let search_height = if !is_basic { 5 } else { 3 };
let is_sort_open = proc_widget_state.is_sort_open;
let mut proc_draw_loc = draw_loc;
@ -38,13 +38,7 @@ impl Painter {
.split(draw_loc);
proc_draw_loc = processes_chunk[0];
self.draw_search_field(
f,
app_state,
processes_chunk[1],
draw_border,
widget_id + 1,
);
self.draw_search_field(f, app_state, processes_chunk[1], widget_id + 1);
}
if is_sort_open {
@ -110,8 +104,7 @@ impl Painter {
/// - `widget_id` represents the widget ID of the search box itself --- NOT
/// the process widget state that is stored.
fn draw_search_field(
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, draw_border: bool,
widget_id: u64,
&self, f: &mut Frame<'_>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {
fn build_query_span(
search_state: &AppSearchState, available_width: usize, is_on_widget: bool,
@ -157,16 +150,18 @@ impl Painter {
}
}
let is_basic = app_state.app_config_fields.use_basic_mode;
if let Some(proc_widget_state) = app_state
.states
.proc_state
.widget_states
.get_mut(&(widget_id - 1))
{
let is_on_widget = widget_id == app_state.current_widget.widget_id;
let is_selected = widget_id == app_state.current_widget.widget_id;
let num_columns = usize::from(draw_loc.width);
const SEARCH_TITLE: &str = "> ";
let offset = if draw_border { 4 } else { 2 }; // width of 3 removed for >_|
let offset = 4;
let available_width = if num_columns > (offset + 3) {
num_columns - offset
} else {
@ -182,18 +177,18 @@ impl Painter {
let query_with_cursor = build_query_span(
&proc_widget_state.proc_search.search_state,
available_width,
is_on_widget,
self.colours.selected_text_style,
self.colours.text_style,
is_selected,
self.styles.selected_text_style,
self.styles.text_style,
);
let mut search_text = vec![Line::from({
let mut search_vec = vec![Span::styled(
SEARCH_TITLE,
if is_on_widget {
self.colours.table_header_style
if is_selected {
self.styles.table_header_style
} else {
self.colours.text_style
self.styles.text_style
},
)];
search_vec.extend(query_with_cursor);
@ -203,21 +198,21 @@ impl Painter {
// Text options shamelessly stolen from VS Code.
let case_style = if !proc_widget_state.proc_search.is_ignoring_case {
self.colours.selected_text_style
self.styles.selected_text_style
} else {
self.colours.text_style
self.styles.text_style
};
let whole_word_style = if proc_widget_state.proc_search.is_searching_whole_word {
self.colours.selected_text_style
self.styles.selected_text_style
} else {
self.colours.text_style
self.styles.text_style
};
let regex_style = if proc_widget_state.proc_search.is_searching_with_regex {
self.colours.selected_text_style
self.styles.selected_text_style
} else {
self.colours.text_style
self.styles.text_style
};
// TODO: [MOUSE] Mouse support for these in search
@ -245,54 +240,42 @@ impl Painter {
} else {
""
},
self.colours.invalid_query_style,
self.styles.invalid_query_style,
)));
search_text.push(option_text);
let current_border_style =
if proc_widget_state.proc_search.search_state.is_invalid_search {
self.colours.invalid_query_style
} else if is_on_widget {
self.colours.highlighted_border_style
self.styles.invalid_query_style
} else if is_selected {
self.styles.highlighted_border_style
} else {
self.colours.border_style
self.styles.border_style
};
let title = Span::styled(
if draw_border {
const TITLE_BASE: &str = " Esc to close ";
let repeat_num =
usize::from(draw_loc.width).saturating_sub(TITLE_BASE.chars().count() + 2);
format!("{} Esc to close ", "".repeat(repeat_num))
} else {
String::new()
},
current_border_style,
);
let process_search_block = {
let mut block = widget_block(is_basic, is_selected, self.styles.border_type)
.border_style(current_border_style);
let process_search_block = if draw_border {
Block::default()
.title(title)
.borders(Borders::ALL)
.border_style(current_border_style)
} else if is_on_widget {
Block::default()
.borders(SIDE_BORDERS)
.border_style(current_border_style)
} else {
Block::default().borders(Borders::NONE)
if !is_basic {
block = block.title_top(
Line::styled(" Esc to close ", current_border_style).right_aligned(),
)
}
block
};
let margined_draw_loc = Layout::default()
.constraints([Constraint::Percentage(100)])
.horizontal_margin(u16::from(!(is_on_widget || draw_border)))
.horizontal_margin(u16::from(is_basic && !is_selected))
.direction(Direction::Horizontal)
.split(draw_loc)[0];
f.render_widget(
Paragraph::new(search_text)
.block(process_search_block)
.style(self.colours.text_style)
.style(self.styles.text_style)
.alignment(Alignment::Left),
margined_draw_loc,
);


@ -1,10 +1,10 @@
use tui::{layout::Rect, terminal::Frame};
use tui::{Frame, layout::Rect};
use crate::{
app,
canvas::{
components::data_table::{DrawInfo, SelectionState},
Painter,
components::data_table::{DrawInfo, SelectionState},
},
};


@ -1,11 +1,20 @@
//! This is the main file to house data collection functions.
//!
//! TODO: Rename this to intake? Collection?
#[cfg(feature = "nvidia")]
pub mod nvidia;
#[cfg(all(target_os = "linux", feature = "gpu"))]
pub mod amd;
#[cfg(target_os = "linux")]
mod linux {
pub mod utils;
}
#[cfg(feature = "battery")]
pub mod batteries;
pub mod cpu;
pub mod disks;
pub mod error;
@ -23,30 +32,30 @@ use processes::Pid;
#[cfg(feature = "battery")]
use starship_battery::{Battery, Manager};
use self::temperature::TemperatureType;
use super::DataFilters;
use crate::app::layout_manager::UsedWidgets;
// TODO: We can possibly re-use an internal buffer for this to reduce allocs.
#[derive(Clone, Debug)]
pub struct Data {
pub collection_time: Instant,
pub cpu: Option<cpu::CpuHarvest>,
pub load_avg: Option<cpu::LoadAvgHarvest>,
pub memory: Option<memory::MemHarvest>,
pub memory: Option<memory::MemData>,
#[cfg(not(target_os = "windows"))]
pub cache: Option<memory::MemHarvest>,
pub swap: Option<memory::MemHarvest>,
pub temperature_sensors: Option<Vec<temperature::TempHarvest>>,
pub cache: Option<memory::MemData>,
pub swap: Option<memory::MemData>,
pub temperature_sensors: Option<Vec<temperature::TempSensorData>>,
pub network: Option<network::NetworkHarvest>,
pub list_of_processes: Option<Vec<processes::ProcessHarvest>>,
pub disks: Option<Vec<disks::DiskHarvest>>,
pub io: Option<disks::IoHarvest>,
#[cfg(feature = "battery")]
pub list_of_batteries: Option<Vec<batteries::BatteryHarvest>>,
pub list_of_batteries: Option<Vec<batteries::BatteryData>>,
#[cfg(feature = "zfs")]
pub arc: Option<memory::MemHarvest>,
pub arc: Option<memory::MemData>,
#[cfg(feature = "gpu")]
pub gpu: Option<Vec<(String, memory::MemHarvest)>>,
pub gpu: Option<Vec<(String, memory::MemData)>>,
}
impl Default for Data {
@ -141,7 +150,6 @@ impl Default for SysinfoSource {
pub struct DataCollector {
pub data: Data,
sys: SysinfoSource,
temperature_type: TemperatureType,
use_current_cpu_total: bool,
unnormalized_cpu: bool,
last_collection_time: Instant,
@ -187,7 +195,6 @@ impl DataCollector {
prev_idle: 0_f64,
#[cfg(target_os = "linux")]
prev_non_idle: 0_f64,
temperature_type: TemperatureType::Celsius,
use_current_cpu_total: false,
unnormalized_cpu: false,
last_collection_time,
@ -234,14 +241,10 @@ impl DataCollector {
self.data.cleanup();
}
pub fn set_data_collection(&mut self, used_widgets: UsedWidgets) {
pub fn set_collection(&mut self, used_widgets: UsedWidgets) {
self.widgets_to_harvest = used_widgets;
}
pub fn set_temperature_type(&mut self, temperature_type: TemperatureType) {
self.temperature_type = temperature_type;
}
pub fn set_use_current_cpu_total(&mut self, use_current_cpu_total: bool) {
self.use_current_cpu_total = use_current_cpu_total;
}
@ -347,12 +350,14 @@ impl DataCollector {
#[inline]
fn update_gpus(&mut self) {
if self.widgets_to_harvest.use_gpu {
let mut local_gpu: Vec<(String, memory::MemData)> = Vec::new();
let mut local_gpu_pids: Vec<HashMap<u32, (u64, u32)>> = Vec::new();
let mut local_gpu_total_mem: u64 = 0;
#[cfg(feature = "nvidia")]
if let Some(data) = nvidia::get_nvidia_vecs(
&self.temperature_type,
&self.filters.temp_filter,
&self.widgets_to_harvest,
) {
if let Some(data) =
nvidia::get_nvidia_vecs(&self.filters.temp_filter, &self.widgets_to_harvest)
{
if let Some(mut temp) = data.temperature {
if let Some(sensors) = &mut self.data.temperature_sensors {
sensors.append(&mut temp);
@ -360,14 +365,31 @@ impl DataCollector {
self.data.temperature_sensors = Some(temp);
}
}
if let Some(mem) = data.memory {
self.data.gpu = Some(mem);
if let Some(mut mem) = data.memory {
local_gpu.append(&mut mem);
}
if let Some(proc) = data.procs {
self.gpu_pids = Some(proc.1);
self.gpus_total_mem = Some(proc.0);
if let Some(mut proc) = data.procs {
local_gpu_pids.append(&mut proc.1);
local_gpu_total_mem += proc.0;
}
}
#[cfg(target_os = "linux")]
if let Some(data) =
amd::get_amd_vecs(&self.widgets_to_harvest, self.last_collection_time)
{
if let Some(mut mem) = data.memory {
local_gpu.append(&mut mem);
}
if let Some(mut proc) = data.procs {
local_gpu_pids.append(&mut proc.1);
local_gpu_total_mem += proc.0;
}
}
self.data.gpu = (!local_gpu.is_empty()).then_some(local_gpu);
self.gpu_pids = (!local_gpu_pids.is_empty()).then_some(local_gpu_pids);
self.gpus_total_mem = (local_gpu_total_mem > 0).then_some(local_gpu_total_mem);
}
}
@ -400,18 +422,14 @@ impl DataCollector {
fn update_temps(&mut self) {
if self.widgets_to_harvest.use_temp {
#[cfg(not(target_os = "linux"))]
if let Ok(data) = temperature::get_temperature_data(
&self.sys.temps,
&self.temperature_type,
&self.filters.temp_filter,
) {
if let Ok(data) =
temperature::get_temperature_data(&self.sys.temps, &self.filters.temp_filter)
{
self.data.temperature_sensors = data;
}
#[cfg(target_os = "linux")]
if let Ok(data) =
temperature::get_temperature_data(&self.temperature_type, &self.filters.temp_filter)
{
if let Ok(data) = temperature::get_temperature_data(&self.filters.temp_filter) {
self.data.temperature_sensors = data;
}
}
@ -479,7 +497,7 @@ impl DataCollector {
#[inline]
fn total_memory(&self) -> u64 {
if let Some(memory) = &self.data.memory {
memory.total_bytes
memory.total_bytes.get()
} else {
self.sys.system.total_memory()
}

src/collection/amd.rs (new file, 412 lines)

@ -0,0 +1,412 @@
mod amd_gpu_marketing;
use std::{
fs::{self, read_to_string},
num::NonZeroU64,
path::{Path, PathBuf},
sync::{LazyLock, Mutex},
time::{Duration, Instant},
};
use hashbrown::{HashMap, HashSet};
use crate::{app::layout_manager::UsedWidgets, collection::memory::MemData};
use super::linux::utils::is_device_awake;
// TODO: May be able to clean up some of these, Option<Vec> for example is a bit redundant.
pub struct AmdGpuData {
pub memory: Option<Vec<(String, MemData)>>,
pub procs: Option<(u64, Vec<HashMap<u32, (u64, u32)>>)>,
}
pub struct AmdGpuMemory {
pub total: u64,
pub used: u64,
}
#[derive(Debug, Clone, Default, Eq, PartialEq)]
pub struct AmdGpuProc {
pub vram_usage: u64,
pub gfx_usage: u64,
pub dma_usage: u64,
pub enc_usage: u64,
pub dec_usage: u64,
pub uvd_usage: u64,
pub vcn_usage: u64,
pub vpe_usage: u64,
pub compute_usage: u64,
}
// needs previous state for usage calculation
static PROC_DATA: LazyLock<Mutex<HashMap<PathBuf, HashMap<u32, AmdGpuProc>>>> =
LazyLock::new(|| Mutex::new(HashMap::new()));
fn get_amd_devs() -> Option<Vec<PathBuf>> {
let mut devices = Vec::new();
// read all PCI devices controlled by the AMDGPU module
let Ok(paths) = fs::read_dir("/sys/module/amdgpu/drivers/pci:amdgpu") else {
return None;
};
for path in paths {
let Ok(path) = path else { continue };
// check that the entry resolves to an actual device directory
let device_path = path.path();
if !device_path.is_dir() {
continue;
}
// Skip if asleep to avoid wakeups.
if !is_device_awake(&device_path) {
continue;
}
// This will exist for GPUs but not other devices; it's how we find their kernel
// name.
let test_path = device_path.join("drm");
if test_path.as_path().exists() {
devices.push(device_path);
}
}
if devices.is_empty() {
None
} else {
Some(devices)
}
}
pub fn get_amd_name(device_path: &Path) -> Option<String> {
// get revision and device ids from sysfs
let rev_path = device_path.join("revision");
let dev_path = device_path.join("device");
if !rev_path.exists() || !dev_path.exists() {
return None;
}
// read the values, trim trailing newlines, and strip the leading "0x" prefix.
let mut rev_data = read_to_string(rev_path).unwrap_or("0x00".to_string());
let mut dev_data = read_to_string(dev_path).unwrap_or("0x0000".to_string());
rev_data = rev_data.trim_end().to_string();
dev_data = dev_data.trim_end().to_string();
if rev_data.starts_with("0x") {
rev_data = rev_data.strip_prefix("0x").unwrap().to_string();
}
if dev_data.starts_with("0x") {
dev_data = dev_data.strip_prefix("0x").unwrap().to_string();
}
let revision_id = u32::from_str_radix(&rev_data, 16).unwrap_or(0);
let device_id = u32::from_str_radix(&dev_data, 16).unwrap_or(0);
if device_id == 0 {
return None;
}
// if it exists in our local database, use that name
amd_gpu_marketing::AMD_GPU_MARKETING_NAME
.iter()
.find(|(did, rid, _)| (did, rid) == (&device_id, &revision_id))
.map(|tuple| tuple.2.to_string())
}
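// Example: a card exposing device id 0x73BF and revision 0xC0 in sysfs maps to
// "AMD Radeon RX 6900 XT" in the marketing-name table; unknown pairs return None
// and the caller falls back to AMDGPU_DEFAULT_NAME.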
fn get_amd_vram(device_path: &Path) -> Option<AmdGpuMemory> {
// get vram memory info from sysfs
let vram_total_path = device_path.join("mem_info_vram_total");
let vram_used_path = device_path.join("mem_info_vram_used");
let Ok(mut vram_total_data) = read_to_string(vram_total_path) else {
return None;
};
let Ok(mut vram_used_data) = read_to_string(vram_used_path) else {
return None;
};
// read and remove newlines
vram_total_data = vram_total_data.trim_end().to_string();
vram_used_data = vram_used_data.trim_end().to_string();
let Ok(vram_total) = vram_total_data.parse::<u64>() else {
return None;
};
let Ok(vram_used) = vram_used_data.parse::<u64>() else {
return None;
};
Some(AmdGpuMemory {
total: vram_total,
used: vram_used,
})
}
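// The two sysfs files hold raw byte counts as plain decimal text, e.g.
// "8589934592" in mem_info_vram_total for an 8 GiB card, so a simple
// trim + parse::<u64>() is enough.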
// from amdgpu_top: https://github.com/Umio-Yasuno/amdgpu_top/blob/c961cf6625c4b6d63fda7f03348323048563c584/crates/libamdgpu_top/src/stat/fdinfo/proc_info.rs#L114
fn diff_usage(pre: u64, cur: u64, interval: &Duration) -> u64 {
use std::ops::Mul;
let diff_ns = if pre == 0 || cur < pre {
return 0;
} else {
cur.saturating_sub(pre) as u128
};
diff_ns
.mul(100)
.checked_div(interval.as_nanos())
.unwrap_or(0) as u64
}
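// Example: pre = 1_000_000_000, cur = 1_500_000_000 over a 1 s interval gives
// (500_000_000 * 100) / 1_000_000_000 = 50, i.e. the engine was ~50% busy.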
// from amdgpu_top: https://github.com/Umio-Yasuno/amdgpu_top/blob/c961cf6625c4b6d63fda7f03348323048563c584/crates/libamdgpu_top/src/stat/fdinfo/proc_info.rs#L13-L27
fn get_amdgpu_pid_fds(pid: u32, device_path: Vec<PathBuf>) -> Option<Vec<u32>> {
let Ok(fd_list) = fs::read_dir(format!("/proc/{pid}/fd/")) else {
return None;
};
let valid_fds: Vec<u32> = fd_list
.filter_map(|fd_link| {
let dir_entry = fd_link.map(|fd_link| fd_link.path()).ok()?;
let link = fs::read_link(&dir_entry).ok()?;
// e.g. "/dev/dri/renderD128" or "/dev/dri/card0"
if device_path.iter().any(|path| link.starts_with(path)) {
dir_entry.file_name()?.to_str()?.parse::<u32>().ok()
} else {
None
}
})
.collect();
if valid_fds.is_empty() {
None
} else {
Some(valid_fds)
}
}
fn get_amdgpu_drm(device_path: &Path) -> Option<Vec<PathBuf>> {
let mut drm_devices = Vec::new();
let drm_root = device_path.join("drm");
let Ok(drm_paths) = fs::read_dir(drm_root) else {
return None;
};
for drm_dir in drm_paths {
let Ok(drm_dir) = drm_dir else {
continue;
};
// attempt to get the device renderer name
let drm_name = drm_dir.file_name();
let Some(drm_name) = drm_name.to_str() else {
continue;
};
// construct driver device path if valid
if !drm_name.starts_with("card") && !drm_name.starts_with("render") {
continue;
}
drm_devices.push(PathBuf::from(format!("/dev/dri/{drm_name}")));
}
if drm_devices.is_empty() {
None
} else {
Some(drm_devices)
}
}
fn get_amd_fdinfo(device_path: &Path) -> Option<HashMap<u32, AmdGpuProc>> {
let mut fdinfo = HashMap::new();
let drm_paths = get_amdgpu_drm(device_path)?;
let Ok(proc_dir) = fs::read_dir("/proc") else {
return None;
};
let pids: Vec<u32> = proc_dir
.filter_map(|dir_entry| {
// check if pid is valid
let dir_entry = dir_entry.ok()?;
let metadata = dir_entry.metadata().ok()?;
if !metadata.is_dir() {
return None;
}
let pid = dir_entry.file_name().to_str()?.parse::<u32>().ok()?;
// skip init process
if pid == 1 {
return None;
}
Some(pid)
})
.collect();
for pid in pids {
// collect file descriptors that point to our device renderers
let Some(fds) = get_amdgpu_pid_fds(pid, drm_paths.clone()) else {
continue;
};
let mut usage: AmdGpuProc = Default::default();
let mut observed_ids: HashSet<usize> = HashSet::new();
for fd in fds {
let fdinfo_path = format!("/proc/{pid}/fdinfo/{fd}");
let Ok(fdinfo_data) = read_to_string(fdinfo_path) else {
continue;
};
let mut fdinfo_lines = fdinfo_data
.lines()
.skip_while(|l| !l.starts_with("drm-client-id"));
if let Some(id) = fdinfo_lines.next().and_then(|fdinfo_line| {
const LEN: usize = "drm-client-id:\t".len();
fdinfo_line.get(LEN..)?.parse().ok()
}) {
if !observed_ids.insert(id) {
continue;
}
} else {
continue;
}
for fdinfo_line in fdinfo_lines {
let Some(fdinfo_separator_index) = fdinfo_line.find(':') else {
continue;
};
let (fdinfo_keyword, mut fdinfo_value) =
fdinfo_line.split_at(fdinfo_separator_index);
fdinfo_value = &fdinfo_value[1..];
fdinfo_value = fdinfo_value.trim();
if let Some(fdinfo_value_space_index) = fdinfo_value.find(' ') {
fdinfo_value = &fdinfo_value[..fdinfo_value_space_index];
};
let Ok(fdinfo_value_num) = fdinfo_value.parse::<u64>() else {
continue;
};
match fdinfo_keyword {
"drm-engine-gfx" => usage.gfx_usage += fdinfo_value_num,
"drm-engine-dma" => usage.dma_usage += fdinfo_value_num,
"drm-engine-dec" => usage.dec_usage += fdinfo_value_num,
"drm-engine-enc" => usage.enc_usage += fdinfo_value_num,
"drm-engine-enc_1" => usage.uvd_usage += fdinfo_value_num,
"drm-engine-jpeg" => usage.vcn_usage += fdinfo_value_num,
"drm-engine-vpe" => usage.vpe_usage += fdinfo_value_num,
"drm-engine-compute" => usage.compute_usage += fdinfo_value_num,
"drm-memory-vram" => usage.vram_usage += fdinfo_value_num << 10, // KiB -> B
_ => {}
};
}
}
if usage != Default::default() {
fdinfo.insert(pid, usage);
}
}
Some(fdinfo)
}
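// For reference, the parser above expects /proc/<pid>/fdinfo/<fd> entries roughly of
// this shape (an illustrative sample, not captured from a real system; keys and
// values are tab-separated in the real files):
//
//   drm-client-id:      42
//   drm-engine-gfx:     1234567890 ns
//   drm-engine-compute: 0 ns
//   drm-memory-vram:    524288 KiB
//
// Engine values are cumulative busy-time counters in nanoseconds, which is why
// get_amd_vecs later feeds two samples plus the elapsed interval into diff_usage.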
pub fn get_amd_vecs(widgets_to_harvest: &UsedWidgets, prev_time: Instant) -> Option<AmdGpuData> {
let device_path_list = get_amd_devs()?;
let interval = Instant::now().duration_since(prev_time);
let num_gpu = device_path_list.len();
let mut mem_vec = Vec::with_capacity(num_gpu);
let mut proc_vec = Vec::with_capacity(num_gpu);
let mut total_mem = 0;
for device_path in device_path_list {
let device_name = get_amd_name(&device_path)
.unwrap_or(amd_gpu_marketing::AMDGPU_DEFAULT_NAME.to_string());
if let Some(mem) = get_amd_vram(&device_path) {
if widgets_to_harvest.use_mem {
if let Some(total_bytes) = NonZeroU64::new(mem.total) {
mem_vec.push((
device_name.clone(),
MemData {
total_bytes,
used_bytes: mem.used,
},
));
}
}
total_mem += mem.total
}
if widgets_to_harvest.use_proc {
if let Some(procs) = get_amd_fdinfo(&device_path) {
let mut proc_info = PROC_DATA.lock().unwrap();
let _ = proc_info.try_insert(device_path.clone(), HashMap::new());
let prev_fdinfo = proc_info.get_mut(&device_path).unwrap();
let mut procs_map = HashMap::new();
for (proc_pid, proc_usage) in procs {
if let Some(prev_usage) = prev_fdinfo.get_mut(&proc_pid) {
// calculate deltas
let gfx_usage =
diff_usage(prev_usage.gfx_usage, proc_usage.gfx_usage, &interval);
let dma_usage =
diff_usage(prev_usage.dma_usage, proc_usage.dma_usage, &interval);
let enc_usage =
diff_usage(prev_usage.enc_usage, proc_usage.enc_usage, &interval);
let dec_usage =
diff_usage(prev_usage.dec_usage, proc_usage.dec_usage, &interval);
let uvd_usage =
diff_usage(prev_usage.uvd_usage, proc_usage.uvd_usage, &interval);
let vcn_usage =
diff_usage(prev_usage.vcn_usage, proc_usage.vcn_usage, &interval);
let vpe_usage =
diff_usage(prev_usage.vpe_usage, proc_usage.vpe_usage, &interval);
// combined usage
let gpu_util_wide = gfx_usage
+ dma_usage
+ enc_usage
+ dec_usage
+ uvd_usage
+ vcn_usage
+ vpe_usage;
let gpu_util: u32 = gpu_util_wide.try_into().unwrap_or(0);
if gpu_util > 0 || proc_usage.vram_usage > 0 {
procs_map.insert(proc_pid, (proc_usage.vram_usage, gpu_util));
}
*prev_usage = proc_usage;
} else {
prev_fdinfo.insert(proc_pid, proc_usage);
}
}
if !procs_map.is_empty() {
proc_vec.push(procs_map);
}
}
}
}
Some(AmdGpuData {
memory: (!mem_vec.is_empty()).then_some(mem_vec),
procs: (!proc_vec.is_empty()).then_some((total_mem, proc_vec)),
})
}


@ -0,0 +1,667 @@
// from https://github.com/GPUOpen-Tools/device_info/blob/master/DeviceInfo.cpp
pub const AMDGPU_DEFAULT_NAME: &str = "AMD Radeon Graphics";
pub const AMD_GPU_MARKETING_NAME: &[(u32, u32, &str)] = &[
(0x6798, 0x00, "AMD Radeon R9 200 / HD 7900"),
(0x6799, 0x00, "AMD Radeon HD 7900"),
(0x679A, 0x00, "AMD Radeon HD 7900"),
(0x679B, 0x00, "AMD Radeon HD 7900"),
(0x679E, 0x00, "AMD Radeon HD 7800"),
(0x6780, 0x00, "AMD FirePro W9000"),
(0x6784, 0x00, "ATI FirePro V"),
(0x6788, 0x00, "ATI FirePro V"),
(0x678A, 0x00, "AMD FirePro W8000"),
(0x6818, 0x00, "AMD Radeon HD 7800"),
(0x6819, 0x00, "AMD Radeon HD 7800"),
(0x6808, 0x00, "AMD FirePro W7000"),
(0x6809, 0x00, "ATI FirePro W5000"),
(0x684C, 0x00, "ATI FirePro V"),
(0x6800, 0x00, "AMD Radeon HD 7970M"),
(0x6801, 0x00, "AMD Radeon HD8970M"),
(0x6806, 0x00, "AMD Radeon R9 M290X"),
(0x6810, 0x00, "AMD Radeon R9 200"),
(0x6810, 0x81, "AMD Radeon R9 370"),
(0x6811, 0x00, "AMD Radeon R9 200"),
(0x6811, 0x81, "AMD Radeon R7 370"),
(0x6820, 0x00, "AMD Radeon R9 M275X"),
(0x6820, 0x81, "AMD Radeon R9 M375"),
(0x6820, 0x83, "AMD Radeon R9 M375X"),
(0x6821, 0x00, "AMD Radeon R9 M200X"),
(0x6821, 0x83, "AMD Radeon R9 M370X"),
(0x6821, 0x87, "AMD Radeon R7 M380"),
(0x6822, 0x00, "AMD Radeon E8860"),
(0x6823, 0x00, "AMD Radeon R9 M200X"),
(0x6825, 0x00, "AMD Radeon HD 7800M"),
(0x6826, 0x00, "AMD Radeon HD 7700M"),
(0x6827, 0x00, "AMD Radeon HD 7800M"),
(0x682B, 0x00, "AMD Radeon HD 8800M"),
(0x682B, 0x87, "AMD Radeon R9 M360"),
(0x682D, 0x00, "AMD Radeon HD 7700M"),
(0x682F, 0x00, "AMD Radeon HD 7700M"),
(0x6828, 0x00, "AMD FirePro W600"),
(0x682C, 0x00, "AMD FirePro W4100"),
(0x6830, 0x00, "AMD Radeon 7800M"),
(0x6831, 0x00, "AMD Radeon 7700M"),
(0x6835, 0x00, "AMD Radeon R7 Series / HD 9000"),
(0x6837, 0x00, "AMD Radeon HD 7700"),
(0x683D, 0x00, "AMD Radeon HD 7700"),
(0x683F, 0x00, "AMD Radeon HD 7700"),
(0x6608, 0x00, "AMD FirePro W2100"),
(0x6610, 0x00, "AMD Radeon R7 200"),
(0x6610, 0x81, "AMD Radeon R7 350"),
(0x6610, 0x83, "AMD Radeon R5 340"),
(0x6610, 0x87, "AMD Radeon R7 200"),
(0x6611, 0x00, "AMD Radeon R7 200"),
(0x6611, 0x87, "AMD Radeon R7 200"),
(0x6613, 0x00, "AMD Radeon R7 200"),
(0x6617, 0x00, "AMD Radeon R7 240"),
(0x6617, 0x87, "AMD Radeon R7 200"),
(0x6617, 0xC7, "AMD Radeon R7 240"),
(0x6600, 0x00, "AMD Radeon HD 8600/8700M"),
(0x6600, 0x81, "AMD Radeon R7 M370"),
(0x6601, 0x00, "AMD Radeon HD 8500M/8700M"),
(0x6604, 0x00, "AMD Radeon R7 M265"),
(0x6604, 0x81, "AMD Radeon R7 M350"),
(0x6605, 0x00, "AMD Radeon R7 M260"),
(0x6605, 0x81, "AMD Radeon R7 M340"),
(0x6606, 0x00, "AMD Radeon HD 8790M"),
(0x6607, 0x00, "AMD Radeon R5 M240"),
(0x6660, 0x00, "AMD Radeon HD 8600M"),
(0x6660, 0x81, "AMD Radeon R5 M335"),
(0x6660, 0x83, "AMD Radeon R5 M330"),
(0x6663, 0x00, "AMD Radeon HD 8500M"),
(0x6663, 0x83, "AMD Radeon R5 M320"),
(0x6664, 0x00, "AMD Radeon R5 M200"),
(0x6665, 0x00, "AMD Radeon R5 M230"),
(0x6665, 0x83, "AMD Radeon R5 M320"),
(0x6665, 0xC3, "AMD Radeon R5 M435"),
(0x6666, 0x00, "AMD Radeon R5 M200"),
(0x6667, 0x00, "AMD Radeon R5 M200"),
(0x666F, 0x00, "AMD Radeon HD 8500M"),
(0x6649, 0x00, "AMD FirePro W5100"),
(0x6658, 0x00, "AMD Radeon R7 200"),
(0x665C, 0x00, "AMD Radeon HD 7700"),
(0x665D, 0x00, "AMD Radeon R7 200"),
(0x665F, 0x81, "AMD Radeon R7 360"),
(0x665F, 0x81, "AMD Radeon R7 360"),
(0x6640, 0x00, "AMD Radeon HD 8950"),
(0x6640, 0x80, "AMD Radeon R9 M380"),
(0x6646, 0x00, "AMD Radeon R9 M280X"),
(0x6646, 0x80, "AMD Radeon R9 M385"),
(0x6647, 0x00, "AMD Radeon R9 M200X"),
(0x6647, 0x80, "AMD Radeon R9 M380"),
(0x67A0, 0x00, "AMD FirePro W9100"),
(0x67A1, 0x00, "AMD FirePro W8100"),
(0x67B0, 0x00, "AMD Radeon R9 200"),
(0x67B0, 0x80, "AMD Radeon R9 390"),
(0x67B1, 0x00, "AMD Radeon R9 200"),
(0x67B1, 0x80, "AMD Radeon R9 390"),
(0x67B9, 0x00, "AMD Radeon R9 200"),
(0x1309, 0x00, "AMD Radeon R7"),
(0x130A, 0x00, "AMD Radeon R6"),
(0x130C, 0x00, "AMD Radeon R7"),
(0x130D, 0x00, "AMD Radeon R6"),
(0x130E, 0x00, "AMD Radeon R5"),
(0x130F, 0x00, "AMD Radeon R7"),
(0x130F, 0xD4, "AMD Radeon R7"),
(0x130F, 0xD5, "AMD Radeon R7"),
(0x130F, 0xD6, "AMD Radeon R7"),
(0x130F, 0xD7, "AMD Radeon R7"),
(0x1313, 0x00, "AMD Radeon R7"),
(0x1313, 0xD4, "AMD Radeon R7"),
(0x1313, 0xD5, "AMD Radeon R7"),
(0x1313, 0xD6, "AMD Radeon R7"),
(0x1315, 0x00, "AMD Radeon R5"),
(0x1315, 0xD4, "AMD Radeon R5"),
(0x1315, 0xD5, "AMD Radeon R5"),
(0x1315, 0xD6, "AMD Radeon R5"),
(0x1315, 0xD7, "AMD Radeon R5"),
(0x1318, 0x00, "AMD Radeon R5"),
(0x131C, 0x00, "AMD Radeon R7"),
(0x131D, 0x00, "AMD Radeon R6"),
(0x130B, 0x00, "AMD Radeon R4"),
(0x1316, 0x00, "AMD Radeon R5"),
(0x131B, 0x00, "AMD Radeon R4"),
(0x9830, 0x00, "AMD Radeon HD 8400 / R3"),
(0x9831, 0x00, "AMD Radeon HD 8400E"),
(0x9832, 0x00, "AMD Radeon HD 8330"),
(0x9833, 0x00, "AMD Radeon HD 8330E"),
(0x9834, 0x00, "AMD Radeon HD 8210"),
(0x9835, 0x00, "AMD Radeon HD 8210E"),
(0x9836, 0x00, "AMD Radeon HD 8200 / R3"),
(0x9837, 0x00, "AMD Radeon HD 8280E"),
(0x9838, 0x00, "AMD Radeon HD 8200 / R3"),
(0x9839, 0x00, "AMD Radeon HD 8180"),
(0x983D, 0x00, "AMD Radeon HD 8250"),
(0x9850, 0x00, "AMD Radeon R3"),
(0x9850, 0x03, "AMD Radeon R3"),
(0x9850, 0x40, "AMD Radeon R2"),
(0x9850, 0x45, "AMD Radeon R3"),
(0x9851, 0x00, "AMD Radeon R4"),
(0x9851, 0x01, "AMD Radeon R5E"),
(0x9851, 0x05, "AMD Radeon R5"),
(0x9851, 0x06, "AMD Radeon R5E"),
(0x9851, 0x40, "AMD Radeon R4"),
(0x9851, 0x45, "AMD Radeon R5"),
(0x9852, 0x00, "AMD Radeon R2"),
(0x9852, 0x40, "AMD Radeon E1"),
(0x9853, 0x00, "AMD Radeon R2"),
(0x9853, 0x01, "AMD Radeon R4E"),
(0x9853, 0x03, "AMD Radeon R2"),
(0x9853, 0x05, "AMD Radeon R1E"),
(0x9853, 0x06, "AMD Radeon R1E"),
(0x9853, 0x40, "AMD Radeon R2"),
(0x9853, 0x07, "AMD Radeon R1E"),
(0x9853, 0x08, "AMD Radeon R1E"),
(0x9854, 0x00, "AMD Radeon R3"),
(0x9854, 0x01, "AMD Radeon R3E"),
(0x9854, 0x02, "AMD Radeon R3"),
(0x9854, 0x05, "AMD Radeon R2"),
(0x9854, 0x06, "AMD Radeon R4"),
(0x9854, 0x07, "AMD Radeon R3"),
(0x9855, 0x02, "AMD Radeon R6"),
(0x9855, 0x05, "AMD Radeon R4"),
(0x9856, 0x07, "AMD Radeon R1E"),
(0x9856, 0x00, "AMD Radeon R2"),
(0x9856, 0x01, "AMD Radeon R2E"),
(0x9856, 0x02, "AMD Radeon R2"),
(0x9856, 0x05, "AMD Radeon R1E"),
(0x9856, 0x06, "AMD Radeon R2"),
(0x9856, 0x07, "AMD Radeon R1E"),
(0x9856, 0x08, "AMD Radeon R1E"),
(0x9856, 0x13, "AMD Radeon R1E"),
(0x6900, 0x00, "AMD Radeon R7 M260"),
(0x6900, 0x81, "AMD Radeon R7 M360"),
(0x6900, 0x83, "AMD Radeon R7 M340"),
(0x6900, 0xC1, "AMD Radeon R5 M465"),
(0x6900, 0xC3, "AMD Radeon R5 M445"),
(0x6900, 0xD1, "AMD Radeon 530"),
(0x6900, 0xD3, "AMD Radeon 530"),
(0x6901, 0x00, "AMD Radeon R5 M255"),
(0x6902, 0x00, "AMD Radeon"),
(0x6907, 0x00, "AMD Radeon R5 M255"),
(0x6907, 0x87, "AMD Radeon R5 M315"),
(0x6920, 0x00, "AMD Radeon R9 M395X"),
(0x6920, 0x01, "AMD Radeon R9 M390X"),
(0x6921, 0x00, "AMD Radeon R9 M390X"),
(0x6929, 0x00, "AMD FirePro S7150"),
(0x6929, 0x01, "AMD FirePro S7100X"),
(0x692B, 0x00, "AMD FirePro W7100"),
(0x692F, 0x00, "AMD MxGPU"),
(0x692F, 0x01, "AMD MxGPU"),
(0x6930, 0xF0, "AMD MxGPU"),
(0x6938, 0x00, "AMD Radeon R9 200"),
(0x6938, 0xF1, "AMD Radeon R9 380"),
(0x6938, 0xF0, "AMD Radeon R9 200"),
(0x6939, 0x00, "AMD Radeon R9 200"),
(0x6939, 0xF0, "AMD Radeon R9 200"),
(0x6939, 0xF1, "AMD Radeon R9 380"),
(0x9874, 0xC4, "AMD Radeon R7"),
(0x9874, 0xC5, "AMD Radeon R6"),
(0x9874, 0xC6, "AMD Radeon R6"),
(0x9874, 0xC7, "AMD Radeon R5"),
(0x9874, 0x81, "AMD Radeon R6"),
(0x9874, 0x84, "AMD Radeon R7"),
(0x9874, 0x85, "AMD Radeon R6"),
(0x9874, 0x87, "AMD Radeon R5"),
(0x9874, 0x88, "AMD Radeon R7E"),
(0x9874, 0x89, "AMD Radeon R6E"),
(0x9874, 0xC8, "AMD Radeon R7"),
(0x9874, 0xC9, "AMD Radeon R7"),
(0x9874, 0xCA, "AMD Radeon R5"),
(0x9874, 0xCB, "AMD Radeon R5"),
(0x9874, 0xCC, "AMD Radeon R7"),
(0x9874, 0xCD, "AMD Radeon R7"),
(0x9874, 0xCE, "AMD Radeon R5"),
(0x9874, 0xE1, "AMD Radeon R7"),
(0x9874, 0xE2, "AMD Radeon R7"),
(0x9874, 0xE3, "AMD Radeon R7"),
(0x9874, 0xE4, "AMD Radeon R7"),
(0x9874, 0xE5, "AMD Radeon R5"),
(0x9874, 0xE6, "AMD Radeon R5"),
(0x7300, 0xC1, "AMD FirePro S9300 x2"),
(0x7300, 0xC8, "AMD Radeon R9 Fury"),
(0x7300, 0xC9, "AMD Radeon Pro Duo"),
(0x7300, 0xCA, "AMD Radeon R9 Fury"),
(0x7300, 0xCB, "AMD Radeon R9 Fury"),
(0x730F, 0xC9, "AMD MxGPU"),
(0x98E4, 0x80, "AMD Radeon R5E"),
(0x98E4, 0x81, "AMD Radeon R4E"),
(0x98E4, 0x83, "AMD Radeon R2E"),
(0x98E4, 0x84, "AMD Radeon R2E"),
(0x98E4, 0x86, "AMD Radeon R1E"),
(0x98E4, 0xC0, "AMD Radeon R4"),
(0x98E4, 0xC1, "AMD Radeon R5"),
(0x98E4, 0xC2, "AMD Radeon R4"),
(0x98E4, 0xC4, "AMD Radeon R5"),
(0x98E4, 0xC6, "AMD Radeon R5"),
(0x98E4, 0xC8, "AMD Radeon R4"),
(0x98E4, 0xC9, "AMD Radeon R4"),
(0x98E4, 0xCA, "AMD Radeon R5"),
(0x98E4, 0xD0, "AMD Radeon R2"),
(0x98E4, 0xD1, "AMD Radeon R2"),
(0x98E4, 0xD2, "AMD Radeon R2"),
(0x98E4, 0xD4, "AMD Radeon R2"),
(0x98E4, 0xD9, "AMD Radeon R5"),
(0x98E4, 0xDA, "AMD Radeon R5"),
(0x98E4, 0xDB, "AMD Radeon R3"),
(0x98E4, 0xE1, "AMD Radeon R3"),
(0x98E4, 0xE2, "AMD Radeon R3"),
(0x98E4, 0xE9, "AMD Radeon R4"),
(0x98E4, 0xEA, "AMD Radeon R4"),
(0x98E4, 0xEB, "AMD Radeon R4"),
(0x98E4, 0xEB, "AMD Radeon R3"),
(0x67C0, 0x00, "AMD Radeon Pro WX 7100"),
(0x67C0, 0x80, "AMD Radeon E9550"),
(0x67C2, 0x01, "AMD Radeon Pro V7350x2"),
(0x67C2, 0x02, "AMD Radeon Pro V7300X"),
(0x67C4, 0x00, "AMD Radeon Pro WX 7100"),
(0x67C4, 0x80, "AMD Radeon Embedded E9560"),
(0x67C7, 0x00, "AMD Radeon Pro WX 5100"),
(0x67C7, 0x80, "AMD Radeon Embedded E9390"),
(0x67D0, 0x01, "AMD Radeon Pro V7350x2"),
(0x67FF, 0xE3, "AMD Radeon E9550"),
(0x67FF, 0xF3, "AMD Radeon Pro E9565"),
(0x67FF, 0xF7, "AMD Radeon Pro WX 5100"),
(0x67D0, 0x02, "AMD Radeon Pro V7300X"),
(0x67DF, 0xC4, "AMD Radeon RX 480"),
(0x67DF, 0xC5, "AMD Radeon RX 470"),
(0x67DF, 0xC7, "AMD Radeon RX 480"),
(0x67DF, 0xCF, "AMD Radeon RX 470"),
(0x67DF, 0xFF, "AMD Radeon RX 470"),
(0x67FF, 0xE7, "AMD Radeon Embedded E9390"),
(0x67DF, 0xC0, "AMD Radeon Pro 580X"),
(0x67DF, 0xC1, "AMD Radeon RX 580"),
(0x67DF, 0xC2, "AMD Radeon RX 570"),
(0x67DF, 0xC3, "AMD Radeon RX 580"),
(0x67DF, 0xC6, "AMD Radeon RX 570"),
(0x67DF, 0xC7, "AMD Radeon RX 480"),
(0x67DF, 0xCF, "AMD Radeon RX 470"),
(0x67DF, 0xD7, "AMD Radeon RX 470"),
(0x67DF, 0xE0, "AMD Radeon RX 470"),
(0x67DF, 0xE1, "AMD Radeon RX 590"),
(0x67DF, 0xE3, "AMD Radeon RX"),
(0x67DF, 0xE7, "AMD Radeon RX 580"),
(0x67DF, 0xEB, "AMD Radeon Pro 580X"),
(0x67DF, 0xEF, "AMD Radeon RX 570"),
(0x67DF, 0xF7, "AMD P30PH"),
(0x67DF, 0xFF, "AMD Radeon RX 470"),
(0x6FDF, 0xEF, "AMD Radeon RX 580 2048SP"),
(0x67E0, 0x00, "AMD Radeon Pro WX"),
(0x67E3, 0x00, "AMD Radeon Pro WX 4100"),
(0x67E8, 0x00, "AMD Radeon Pro WX"),
(0x67E8, 0x01, "AMD Radeon Pro WX"),
(0x67E8, 0x80, "AMD Radeon E9260"),
(0x67EB, 0x00, "AMD Radeon Pro V5300X"),
(0x67EF, 0xC0, "AMD Radeon RX 560"),
(0x67EF, 0xC1, "AMD Radeon RX 560"),
(0x67EF, 0xC5, "AMD Radeon RX 560"),
(0x67EF, 0xC7, "AMD Radeon 550"),
(0x67EF, 0xCF, "AMD Radeon RX 460"),
(0x67EF, 0xEF, "AMD Radeon 550"),
(0x67FF, 0xC0, "AMD Radeon Pro 465"),
(0x67FF, 0xC1, "AMD Radeon RX 560"),
(0x67EF, 0xC2, "AMD Radeon Pro"),
(0x67EF, 0xE3, "AMD Radeon Pro"),
(0x67EF, 0xE5, "AMD Radeon RX 560"),
(0x67EF, 0xE7, "AMD Radeon RX 560"),
(0x67EF, 0xE0, "AMD Radeon RX 560"),
(0x67EF, 0xFF, "AMD Radeon RX 460"),
(0x67FF, 0xCF, "AMD Radeon RX 560"),
(0x67FF, 0xEF, "AMD Radeon RX 560"),
(0x67FF, 0xFF, "AMD Radeon RX550/550"),
(0x6980, 0x00, "AMD Radeon Pro WX 3100"),
(0x6981, 0x00, "AMD Radeon Pro WX 3200"),
(0x6981, 0x01, "AMD Radeon Pro WX 3200"),
(0x6981, 0x10, "AMD Radeon Pro WX 3200"),
(0x6985, 0x00, "AMD Radeon Pro WX 3100"),
(0x6986, 0x00, "AMD Radeon Pro WX 2100"),
(0x6987, 0x80, "AMD Embedded Radeon E9171"),
(0x6987, 0xC0, "AMD Radeon 550X"),
(0x6987, 0xC1, "AMD Radeon RX 640"),
(0x6987, 0xC3, "AMD Radeon 540X"),
(0x6987, 0xC7, "AMD Radeon 540"),
(0x6995, 0x00, "AMD Radeon Pro WX 2100"),
(0x6997, 0x00, "AMD Radeon Pro WX 2100"),
(0x699F, 0x81, "AMD Embedded Radeon E9170"),
(0x699F, 0xC0, "AMD Radeon 500"),
(0x699F, 0xC1, "AMD Radeon 540"),
(0x699F, 0xC3, "AMD Radeon 500"),
(0x699F, 0xC7, "AMD Radeon RX550/550"),
(0x699F, 0xC9, "AMD Radeon 540"),
(0x694C, 0xC0, "AMD Radeon RX Vega M GH"),
(0x694E, 0xC0, "AMD Radeon RX Vega M GL"),
(0x6860, 0x00, "AMD Radeon Instinct MI25"),
(0x6860, 0x01, "AMD Radeon Instinct MI25"),
(0x6860, 0x02, "AMD Radeon Instinct MI25"),
(0x6860, 0x03, "AMD Radeon Pro V340"),
(0x6860, 0x04, "AMD Radeon Instinct MI25x2"),
(0x6860, 0x06, "AMD Radeon Instinct MI25"),
(0x6860, 0x07, "AMD Radeon Pro V320"),
(0x6861, 0x00, "AMD Radeon Pro WX 9100"),
(0x6862, 0x00, "AMD Radeon Pro SSG"),
(0x6863, 0x00, "AMD Radeon Vega Frontier Edition"),
(0x6864, 0x03, "AMD Radeon Pro V340"),
(0x6864, 0x04, "AMD Instinct MI25x2"),
(0x6864, 0x05, "AMD Radeon Pro V340"),
(0x6867, 0x00, "AMD Radeon Pro Vega 56"),
(0x6868, 0x00, "AMD Radeon Pro WX 8200"),
(0x686C, 0x00, "AMD Radeon Instinct MI25 MxGPU"),
(0x686C, 0x01, "AMD Radeon Instinct MI25 MxGPU"),
(0x686C, 0x02, "AMD Radeon Instinct MI25 MxGPU"),
(0x686C, 0x03, "AMD Radeon Pro V340 MxGPU"),
(0x686C, 0x04, "AMD Radeon Instinct MI25x2 MxGPU"),
(0x686C, 0x05, "AMD Radeon Pro V340 MxGPU"),
(0x686C, 0x06, "AMD Radeon Instinct MI25 MxGPU"),
(0x687F, 0x01, "AMD Radeon RX Vega"),
(0x687F, 0xC0, "AMD Radeon RX Vega"),
(0x687F, 0xC1, "AMD Radeon RX Vega"),
(0x687F, 0xC3, "AMD Radeon RX Vega"),
(0x687F, 0xC7, "AMD Radeon RX Vega"),
(0x15DD, 0x00, "AMD 15DD"),
(0x15DD, 0x81, "AMD Radeon Vega 11"),
(0x15DD, 0x82, "AMD Radeon Vega 8"),
(0x15DD, 0x83, "AMD Radeon Vega 8"),
(0x15DD, 0x84, "AMD Radeon Vega 6"),
(0x15DD, 0x85, "AMD Radeon Vega 3"),
(0x15DD, 0x86, "AMD Radeon Vega 11"),
(0x15DD, 0x87, "AMD 15DD"),
(0x15DD, 0x88, "AMD Radeon Vega 8"),
(0x15DD, 0xC1, "AMD Radeon RX Vega 11"),
(0x15DD, 0xC2, "AMD Radeon Vega 8"),
(0x15DD, 0xC3, "AMD Radeon RX Vega 10"),
(0x15DD, 0xC4, "AMD Radeon Vega 8"),
(0x15DD, 0xC5, "AMD Radeon Vega 3"),
(0x15DD, 0xC6, "AMD Radeon RX Vega 11"),
(0x15DD, 0xC7, "AMD 15DD"),
(0x15DD, 0xC8, "AMD Radeon Vega 8"),
(0x15DD, 0xC9, "AMD Radeon RX Vega 11"),
(0x15DD, 0xCA, "AMD Radeon Vega 8"),
(0x15DD, 0xCB, "AMD Radeon Vega 3"),
(0x15DD, 0xCC, "AMD Radeon Vega 6"),
(0x15DD, 0xCD, "AMD 15DD"),
(0x15DD, 0xCE, "AMD Radeon Vega 3"),
(0x15DD, 0xCF, "AMD Radeon Vega 3"),
(0x15DD, 0xD0, "AMD Radeon Vega 10"),
(0x15DD, 0xD1, "AMD Radeon Vega 8"),
(0x15DD, 0xD2, "AMD 15DD"),
(0x15DD, 0xD3, "AMD Radeon Vega 11"),
(0x15DD, 0xD4, "AMD 15DD"),
(0x15DD, 0xD5, "AMD Radeon Vega 8"),
(0x15DD, 0xD6, "AMD Radeon Vega 11"),
(0x15DD, 0xD7, "AMD Radeon Vega 8"),
(0x15DD, 0xD8, "AMD Radeon Vega 3"),
(0x15DD, 0xD9, "AMD Radeon Vega 6"),
(0x15DD, 0xE1, "AMD Radeon Vega 3"),
(0x15DD, 0xE2, "AMD Radeon Vega 3"),
(0x15D8, 0x00, "AMD Radeon RX Vega 8 WS"),
(0x15D8, 0x91, "AMD Radeon Vega 3"),
(0x15D8, 0x92, "AMD Radeon Vega 3"),
(0x15D8, 0x93, "AMD Radeon Vega 1"),
(0x15D8, 0xA1, "AMD Radeon RX Vega 10"),
(0x15D8, 0xA2, "AMD Radeon Vega 8"),
(0x15D8, 0xA3, "AMD Radeon Vega 6"),
(0x15D8, 0xA4, "AMD Radeon Vega 3"),
(0x15D8, 0xB1, "AMD Radeon Vega 10"),
(0x15D8, 0xB2, "AMD Radeon Vega 8"),
(0x15D8, 0xB3, "AMD Radeon Vega 6"),
(0x15D8, 0xB4, "AMD Radeon Vega 3"),
(0x15D8, 0xC1, "AMD Radeon RX Vega 10"),
(0x15D8, 0xC2, "AMD Radeon Vega 8"),
(0x15D8, 0xC3, "AMD Radeon Vega 6"),
(0x15D8, 0xC4, "AMD Radeon Vega 3"),
(0x15D8, 0xC5, "AMD Radeon Vega 3"),
(0x15D8, 0xC8, "AMD Radeon RX Vega 11"),
(0x15D8, 0xC9, "AMD Radeon Vega 8"),
(0x15D8, 0xCA, "AMD Radeon RX Vega 11"),
(0x15D8, 0xCB, "AMD Radeon Vega 8"),
(0x15D8, 0xCC, "AMD Radeon Vega 3"),
(0x15D8, 0xCE, "AMD Radeon Vega 3"),
(0x15D8, 0xCF, "AMD Radeon Vega 3"),
(0x15D8, 0xD1, "AMD Radeon Vega 10"),
(0x15D8, 0xD2, "AMD Radeon Vega 8"),
(0x15D8, 0xD3, "AMD Radeon Vega 6"),
(0x15D8, 0xD4, "AMD Radeon Vega 3"),
(0x15D8, 0xD8, "AMD Radeon Vega 11"),
(0x15D8, 0xD9, "AMD Radeon Vega 8"),
(0x15D8, 0xDA, "AMD Radeon Vega 11"),
(0x15D8, 0xDB, "AMD Radeon Vega 3"),
(0x15D8, 0xDC, "AMD Radeon Vega 3"),
(0x15D8, 0xDD, "AMD Radeon Vega 3"),
(0x15D8, 0xDE, "AMD Radeon Vega 3"),
(0x15D8, 0xDF, "AMD Radeon Vega 3"),
(0x15D8, 0xE1, "AMD Radeon RX Vega 11"),
(0x15D8, 0xE2, "AMD Radeon Vega 9"),
(0x15D8, 0xE3, "AMD Radeon Vega 3"),
(0x15D8, 0xE4, "AMD Radeon Vega 3"),
(0x69AF, 0xC0, "AMD Radeon Pro Vega 20"),
(0x69AF, 0xC7, "AMD Radeon Pro Vega 16"),
(0x69AF, 0xD7, "AMD Radeon RX Vega 16"),
(0x66AF, 0xC1, "AMD Radeon VII"),
(0x66A1, 0x06, "AMD Radeon Pro VII"),
(0x740C, 0x01, "AMD Instinct MI250X"),
(0x740F, 0x02, "AMD Instinct MI210"),
(0x74A1, 0x00, "AMD Instinct MI300X"),
(0x74A1, 0x01, "AMD Instinct MI300A"),
(0x7310, 0x00, "AMD Radeon Pro W5700X"),
(0x7312, 0x00, "AMD Radeon Pro W5700"),
(0x7319, 0x40, "AMD Radeon Pro 5700 XT"),
(0x731E, 0xC7, "AMD Radeon RX 5700B"),
(0x731F, 0xC0, "AMD Radeon RX 5700 XT 50th Anniversary"),
(0x731F, 0xC1, "AMD Radeon RX 5700 XT"),
(0x731F, 0xC2, "AMD Radeon RX 5600M"),
(0x731F, 0xC3, "AMD Radeon RX 5700M"),
(0x731F, 0xC4, "AMD Radeon RX 5700"),
(0x731F, 0xC5, "AMD Radeon RX 5700 XT"),
(0x731F, 0xCA, "AMD Radeon RX 5600 XT"),
(0x731F, 0xCB, "AMD Radeon RX 5600"),
(0x7360, 0x41, "AMD Radeon Pro 5600M"),
(0x7360, 0xC3, "AMD Radeon Pro V520"),
(0x7362, 0xC3, "AMD Radeon Pro V520 MxGPU"),
(0x7340, 0x00, "AMD Radeon Pro W5500X"),
(0x7340, 0x41, "AMD Radeon Pro 5500 XT"),
(0x7340, 0x47, "AMD Radeon Pro 5300"),
(0x7340, 0xC1, "AMD Radeon RX 5500M"),
(0x7340, 0xC3, "AMD Radeon RX 5300M"),
(0x7340, 0xC5, "AMD Radeon RX 5500 XT"),
(0x7340, 0xC7, "AMD Radeon RX 5500"),
(0x7340, 0xCF, "AMD Radeon RX 5300"),
(0x7341, 0x00, "AMD Radeon Pro W5500"),
(0x7347, 0x00, "AMD Radeon Pro W5500M"),
(0x734F, 0x00, "AMD Radeon Pro W5300M"),
(0x73A5, 0xC0, "AMD Radeon RX 6950 XT"),
(0x73AF, 0xC0, "AMD Radeon RX 6900 XT"),
(0x73BF, 0xC0, "AMD Radeon RX 6900 XT"),
(0x73BF, 0xC1, "AMD Radeon RX 6800 XT"),
(0x73BF, 0xC3, "AMD Radeon RX 6800"),
(0x73A1, 0x00, "AMD Radeon Pro V620"),
(0x73A3, 0x00, "AMD Radeon Pro W6800"),
(0x73DF, 0xC0, "AMD Radeon RX 6750 XT"),
(0x73DF, 0xC1, "AMD Radeon RX 6700 XT"),
(0x73DF, 0xC5, "AMD Radeon RX 6700 XT"),
(0x73DF, 0xDF, "AMD Radeon RX 6700"),
(0x73DF, 0xC2, "AMD Radeon RX 6800M"),
(0x73DF, 0xC3, "AMD Radeon RX 6800M"),
(0x73DF, 0xCF, "AMD Radeon RX 6700M"),
(0x73DF, 0xFF, "AMD Radeon RX 6700"),
(0x73EF, 0xC0, "AMD Radeon RX 6800S"),
(0x73EF, 0xC1, "AMD Radeon RX 6650 XT"),
(0x73EF, 0xC2, "AMD Radeon RX 6700S"),
(0x73EF, 0xC3, "AMD Radeon RX 6650M"),
(0x73EF, 0xC4, "AMD Radeon RX 6650M XT"),
(0x73FF, 0xC1, "AMD Radeon RX 6600 XT"),
(0x73FF, 0xC7, "AMD Radeon RX 6600"),
(0x73FF, 0xC3, "AMD Radeon RX 6600M"),
(0x73FF, 0xCB, "AMD Radeon RX 6600S"),
(0x73E1, 0x00, "AMD Radeon Pro W6600M"),
(0x73E3, 0x00, "AMD Radeon Pro W6600"),
(0x7422, 0x00, "AMD Radeon Pro W6400"),
(0x743F, 0xC1, "AMD Radeon RX 6500 XT"),
(0x743F, 0xC7, "AMD Radeon RX 6400"),
(0x743F, 0xD7, "AMD Radeon RX 6400"),
(0x7421, 0x00, "AMD Radeon Pro W6500M"),
(0x7423, 0x00, "AMD Radeon Pro W6300M"),
(0x7423, 0x01, "AMD Radeon Pro W6300"),
(0x743F, 0xC3, "AMD Radeon RX 6500M"),
(0x743F, 0xCF, "AMD Radeon RX 6300M"),
(0x743F, 0xC8, "AMD Radeon RX 6550M"),
(0x743F, 0xCC, "AMD Radeon 6550S"),
(0x743F, 0xCE, "AMD Radeon RX 6450M"),
(0x743F, 0xD3, "AMD Radeon RX 6550M"),
(0x744C, 0xC8, "AMD Radeon RX 7900 XTX"),
(0x744C, 0xCC, "AMD Radeon RX 7900 XT"),
(0x7448, 0x00, "AMD Radeon Pro W7900"),
(0x745E, 0xCC, "AMD Radeon Pro W7800"),
(0x747E, 0xC8, "AMD Radeon RX 7800 XT"),
(0x747E, 0xFF, "AMD Radeon RX 7700 XT"),
(0x747E, 0xD8, "AMD Radeon RX 7800M"),
(0x7480, 0xC0, "AMD Radeon RX 7600 XT"),
(0x7480, 0xCF, "AMD Radeon RX 7600"),
(0x7480, 0xC1, "AMD Radeon RX 7700S"),
(0x7480, 0xC3, "AMD Radeon RX 7600S"),
(0x7480, 0xC7, "AMD Radeon RX 7600M XT"),
(0x7483, 0xCF, "AMD Radeon RX 7600M"),
(0x7480, 0x00, "AMD Radeon Pro W7600"),
(0x7489, 0x00, "AMD Radeon Pro W7500"),
(0x15BF, 0x00, "AMD Radeon 780M"),
(0x15BF, 0x01, "AMD Radeon 760M"),
(0x15BF, 0x02, "AMD Radeon 780M"),
(0x15BF, 0x03, "AMD Radeon 760M"),
(0x15BF, 0xC1, "AMD Radeon 780M"),
(0x15BF, 0xC2, "AMD Radeon 780M"),
(0x15BF, 0xC3, "AMD Radeon 760M"),
(0x15BF, 0xC4, "AMD Radeon 780M"),
(0x15BF, 0xC5, "AMD Radeon 740M"),
(0x15BF, 0xC6, "AMD Radeon 780M"),
(0x15BF, 0xC7, "AMD Radeon 780M"),
(0x15BF, 0xC8, "AMD Radeon 760M"),
(0x15BF, 0xC9, "AMD Radeon 780M"),
(0x15BF, 0xCA, "AMD Radeon 740M"),
(0x15BF, 0xCB, "AMD Radeon 760M"),
(0x15BF, 0xCC, "AMD Radeon 740M"),
(0x15BF, 0xCD, "AMD Radeon 760M"),
(0x15BF, 0xCF, "AMD Radeon 780M"),
(0x15BF, 0xD0, "AMD Radeon 780M"),
(0x15BF, 0xD1, "AMD Radeon 780M"),
(0x15BF, 0xD2, "AMD Radeon 780M"),
(0x15BF, 0xD3, "AMD Radeon 780M"),
(0x15BF, 0xD4, "AMD Radeon 780M"),
(0x15BF, 0xD5, "AMD Radeon 760M"),
(0x15BF, 0xD6, "AMD Radeon 760M"),
(0x15BF, 0xD7, "AMD Radeon 780M"),
(0x15BF, 0xD8, "AMD Radeon 740M"),
(0x15BF, 0xD9, "AMD Radeon 780M"),
(0x15BF, 0xDA, "AMD Radeon 780M"),
(0x15BF, 0xDB, "AMD Radeon 760M"),
(0x15BF, 0xDC, "AMD Radeon 760M"),
(0x15BF, 0xDD, "AMD Radeon 780M"),
(0x15BF, 0xDE, "AMD Radeon 740M"),
(0x15BF, 0xDF, "AMD Radeon 760M"),
(0x15BF, 0xF0, "AMD Radeon 760M"),
(0x1900, 0x01, "AMD Radeon 780M"),
(0x1900, 0x02, "AMD Radeon 760M"),
(0x1900, 0x03, "AMD Radeon 780M"),
(0x1900, 0x04, "AMD Radeon 760M"),
(0x1900, 0x05, "AMD Radeon 780M"),
(0x1900, 0x06, "AMD Radeon 780M"),
(0x1900, 0x07, "AMD Radeon 760M"),
(0x1900, 0xB0, "AMD Radeon 780M"),
(0x1900, 0xB1, "AMD Radeon 780M"),
(0x1900, 0xB2, "AMD Radeon 780M"),
(0x1900, 0xB3, "AMD Radeon 780M"),
(0x1900, 0xB4, "AMD Radeon 780M"),
(0x1900, 0xB5, "AMD Radeon 780M"),
(0x1900, 0xB6, "AMD Radeon 780M"),
(0x1900, 0xB7, "AMD Radeon 760M"),
(0x1900, 0xB8, "AMD Radeon 760M"),
(0x1900, 0xB9, "AMD Radeon 780M"),
(0x1900, 0xC0, "AMD Radeon 780M"),
(0x1900, 0xC1, "AMD Radeon 760M"),
(0x1900, 0xC2, "AMD Radeon 780M"),
(0x1900, 0xC3, "AMD Radeon 760M"),
(0x1900, 0xC4, "AMD Radeon 780M"),
(0x1900, 0xC5, "AMD Radeon 780M"),
(0x1900, 0xC6, "AMD Radeon 760M"),
(0x1900, 0xC7, "AMD Radeon 780M"),
(0x1900, 0xC8, "AMD Radeon 760M"),
(0x1900, 0xC9, "AMD Radeon 780M"),
(0x1900, 0xCA, "AMD Radeon 760M"),
(0x1900, 0xCB, "AMD Radeon 780M"),
(0x1900, 0xCC, "AMD Radeon 780M"),
(0x1900, 0xCD, "AMD Radeon 760M"),
(0x1900, 0xCE, "AMD Radeon 780M"),
(0x1900, 0xCF, "AMD Radeon 760M"),
(0x1900, 0xD0, "AMD Radeon 780M"),
(0x1900, 0xD1, "AMD Radeon 760M"),
(0x1900, 0xD2, "AMD Radeon 780M"),
(0x1900, 0xD3, "AMD Radeon 760M"),
(0x1900, 0xD4, "AMD Radeon 780M"),
(0x1900, 0xD5, "AMD Radeon 780M"),
(0x1900, 0xD6, "AMD Radeon 760M"),
(0x1900, 0xD7, "AMD Radeon 780M"),
(0x1900, 0xD8, "AMD Radeon 760M"),
(0x1900, 0xD9, "AMD Radeon 780M"),
(0x1900, 0xDA, "AMD Radeon 760M"),
(0x1900, 0xDB, "AMD Radeon 780M"),
(0x1900, 0xDC, "AMD Radeon 780M"),
(0x1900, 0xDD, "AMD Radeon 760M"),
(0x1900, 0xDE, "AMD Radeon 780M"),
(0x1900, 0xDF, "AMD Radeon 760M"),
(0x1900, 0xF0, "AMD Radeon 780M"),
(0x1900, 0xF1, "AMD Radeon 780M"),
(0x1900, 0xF2, "AMD Radeon 780M"),
(0x1901, 0xC8, "AMD Radeon 740M"),
(0x1901, 0xC9, "AMD Radeon 740M"),
(0x1901, 0xD5, "AMD Radeon 740M"),
(0x1901, 0xD6, "AMD Radeon 740M"),
(0x1901, 0xD7, "AMD Radeon 740M"),
(0x1901, 0xD8, "AMD Radeon 740M"),
(0x15C8, 0xC1, "AMD Radeon 740M"),
(0x15C8, 0xC2, "AMD Radeon 740M"),
(0x15C8, 0xC3, "AMD Radeon 740M"),
(0x15C8, 0xC4, "AMD Radeon 740M"),
(0x15C8, 0xD1, "AMD Radeon 740M"),
(0x15C8, 0xD2, "AMD Radeon 740M"),
(0x15C8, 0xD3, "AMD Radeon 740M"),
(0x15C8, 0xD4, "AMD Radeon 740M"),
(0x1901, 0xC1, "AMD Radeon 740M"),
(0x1901, 0xC2, "AMD Radeon 740M"),
(0x1901, 0xC3, "AMD Radeon 740M"),
(0x1901, 0xC6, "AMD Radeon 740M"),
(0x1901, 0xC7, "AMD Radeon 740M"),
(0x1901, 0xD1, "AMD Radeon 740M"),
(0x1901, 0xD2, "AMD Radeon 740M"),
(0x1901, 0xD3, "AMD Radeon 740M"),
(0x1901, 0xD4, "AMD Radeon 740M"),
(0x150E, 0xC1, "AMD Radeon 890M"),
(0x150E, 0xC4, "AMD Radeon 890M"),
(0x150E, 0xC5, "AMD Radeon 890M"),
(0x150E, 0xC6, "AMD Radeon 890M"),
(0x150E, 0xD1, "AMD Radeon 890M"),
(0x150E, 0xD2, "AMD Radeon 890M"),
(0x150E, 0xD3, "AMD Radeon 890M"),
(0x74A9, 0x00, "AMD Instinct MI300XHF"),
(0x73AE, 0x00, "AMD Radeon Pro V620 MxGPU"),
(0x73CE, 0xFF, "AMD Radeon V520 MxGPU"),
(0x7449, 0x00, "AMD Radeon Pro W7800 48GB"),
(0x744A, 0x00, "AMD Radeon Pro W7900"),
(0x7480, 0xC2, "AMD Radeon RX 7650 GRE"),
(0x7481, 0xC7, "AMD Radeon RX 7600"),
(0x1900, 0xBA, "AMD Radeon 780M"),
(0x1900, 0xBB, "AMD Radeon 780M"),
(0x1901, 0xCA, "AMD Radeon 740M"),
(0x1586, 0xC1, "AMD Radeon 8060S"),
(0x1586, 0xC2, "AMD Radeon 8050S"),
(0x1586, 0xC4, "AMD Radeon 8050S"),
(0x1586, 0xD1, "AMD Radeon 8060S"),
(0x1586, 0xD2, "AMD Radeon 8050S"),
(0x1586, 0xD4, "AMD Radeon 8050S"),
(0x1586, 0xD5, "AMD Radeon 8040S"),
(0x1114, 0xC2, "AMD Radeon 860M"),
(0x1114, 0xC3, "AMD Radeon 840M"),
(0x1114, 0xD2, "AMD Radeon 860M"),
(0x1114, 0xD3, "AMD Radeon 840M"),
(0x7550, 0xC0, "AMD Radeon RX 9070 XT"),
(0x7550, 0xC3, "AMD Radeon RX 9070"),
];

src/collection/batteries.rs (new file, 101 lines)

@ -0,0 +1,101 @@
//! Uses the battery crate.
//!
//! Covers battery usage for:
//! - Linux 2.6.39+
//! - MacOS 10.10+
//! - iOS
//! - Windows 7+
//! - FreeBSD
//! - DragonFlyBSD
//!
//! For more information, refer to the [starship_battery](https://github.com/starship/rust-battery) repo/docs.
use starship_battery::{
    Battery, Manager, State,
    units::{power::watt, ratio::percent, time::second},
};

/// Battery state.
#[derive(Debug, Clone)]
pub enum BatteryState {
    Charging {
        /// Time to full in seconds.
        time_to_full: Option<u32>,
    },
    Discharging {
        /// Time to empty in seconds.
        time_to_empty: Option<u32>,
    },
    Empty,
    Full,
    Unknown,
}

impl BatteryState {
    /// Return the string representation.
    pub fn as_str(&self) -> &'static str {
        match self {
            BatteryState::Charging { .. } => "Charging",
            BatteryState::Discharging { .. } => "Discharging",
            BatteryState::Empty => "Empty",
            BatteryState::Full => "Full",
            BatteryState::Unknown => "Unknown",
        }
    }
}

#[derive(Debug, Clone)]
pub struct BatteryData {
    /// Current charge percent.
    pub charge_percent: f64,
    /// Power consumption, in watts.
    pub power_consumption: f64,
    /// Reported battery health.
    pub health_percent: f64,
    /// The current battery "state" (e.g. is it full, charging, etc.).
    pub state: BatteryState,
}

impl BatteryData {
    pub fn watt_consumption(&self) -> String {
        format!("{:.2}W", self.power_consumption)
    }

    pub fn health(&self) -> String {
        format!("{:.2}%", self.health_percent)
    }
}

pub fn refresh_batteries(manager: &Manager, batteries: &mut [Battery]) -> Vec<BatteryData> {
    batteries
        .iter_mut()
        .filter_map(|battery| {
            if manager.refresh(battery).is_ok() {
                Some(BatteryData {
                    charge_percent: f64::from(battery.state_of_charge().get::<percent>()),
                    power_consumption: f64::from(battery.energy_rate().get::<watt>()),
                    health_percent: f64::from(battery.state_of_health().get::<percent>()),
                    state: match battery.state() {
                        State::Unknown => BatteryState::Unknown,
                        State::Charging => BatteryState::Charging {
                            time_to_full: {
                                let optional_time = battery.time_to_full();
                                optional_time.map(|time| f64::from(time.get::<second>()) as u32)
                            },
                        },
                        State::Discharging => BatteryState::Discharging {
                            time_to_empty: {
                                let optional_time = battery.time_to_empty();
                                optional_time.map(|time| f64::from(time.get::<second>()) as u32)
                            },
                        },
                        State::Empty => BatteryState::Empty,
                        State::Full => BatteryState::Full,
                    },
                })
            } else {
                None
            }
        })
        .collect::<Vec<_>>()
}
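As a rough usage sketch (illustrative only, not part of this diff): a starship_battery Manager owns the battery handles, and refresh_batteries is called on each collection tick. Error handling is simplified here.

fn collect_battery_data() -> anyhow::Result<Vec<BatteryData>> {
    // Build a manager, enumerate batteries, then take one snapshot. bottom
    // itself keeps the manager and battery list around between refreshes.
    let manager = Manager::new()?;
    let mut batteries: Vec<Battery> = manager.batteries()?.filter_map(Result::ok).collect();
    Ok(refresh_batteries(&manager, &mut batteries))
}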


@ -18,6 +18,3 @@ pub struct CpuData {
}
pub type CpuHarvest = Vec<CpuData>;
pub type PastCpuWork = f64;
pub type PastCpuTotal = f64;

@ -0,0 +1,42 @@
//! CPU stats through sysinfo.
//! Supports FreeBSD.
use sysinfo::System;

use super::{CpuData, CpuDataType, CpuHarvest};
use crate::collection::error::CollectionResult;

pub fn get_cpu_data_list(sys: &System, show_average_cpu: bool) -> CollectionResult<CpuHarvest> {
    let mut cpus = vec![];

    if show_average_cpu {
        let cpu = sys.global_cpu_info();

        cpus.push(CpuData {
            data_type: CpuDataType::Avg,
            cpu_usage: cpu.cpu_usage() as f64,
        })
    }

    cpus.extend(
        sys.cpus()
            .iter()
            .enumerate()
            .map(|(i, cpu)| CpuData {
                data_type: CpuDataType::Cpu(i),
                cpu_usage: cpu.cpu_usage() as f64,
            })
            .collect::<Vec<_>>(),
    );

    Ok(cpus)
}

#[cfg(target_family = "unix")]
pub(crate) fn get_load_avg() -> crate::collection::cpu::LoadAvgHarvest {
    // The API for sysinfo apparently wants you to call it like this, rather than
    // using a &System.
    let sysinfo::LoadAvg { one, five, fifteen } = sysinfo::System::load_average();

    [one as f32, five as f32, fifteen as f32]
}
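A minimal caller sketch for the collector above (illustrative only; it assumes the sysinfo System has already had its CPU usage refreshed by the surrounding collector):

fn print_cpu_usage(sys: &System) {
    if let Ok(cpus) = get_cpu_data_list(sys, true) {
        for cpu in cpus {
            // Label each entry: the average first (if requested), then per-core data.
            let label = match cpu.data_type {
                CpuDataType::Avg => "avg".to_string(),
                CpuDataType::Cpu(i) => format!("cpu{i}"),
            };
            println!("{label}: {:.1}%", cpu.cpu_usage);
        }
    }
}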


@ -103,26 +103,26 @@ pub fn keep_disk_entry(
disk_name: &str, mount_point: &str, disk_filter: &Option<Filter>, mount_filter: &Option<Filter>,
) -> bool {
match (disk_filter, mount_filter) {
(Some(d), Some(m)) => match (d.is_list_ignored, m.is_list_ignored) {
(Some(d), Some(m)) => match (d.ignore_matches(), m.ignore_matches()) {
(true, true) => !(d.has_match(disk_name) || m.has_match(mount_point)),
(true, false) => {
if m.has_match(mount_point) {
true
} else {
d.keep_entry(disk_name)
d.should_keep(disk_name)
}
}
(false, true) => {
if d.has_match(disk_name) {
true
} else {
m.keep_entry(mount_point)
m.should_keep(mount_point)
}
}
(false, false) => d.has_match(disk_name) || m.has_match(mount_point),
},
(Some(d), None) => d.keep_entry(disk_name),
(None, Some(m)) => m.keep_entry(mount_point),
(Some(d), None) => d.should_keep(disk_name),
(None, Some(m)) => m.should_keep(mount_point),
(None, None) => true,
}
}
@ -158,25 +158,10 @@ mod test {
#[test]
fn test_keeping_disk_entry() {
let disk_ignore = Some(Filter {
is_list_ignored: true,
list: vec![Regex::new("nvme").unwrap()],
});
let disk_keep = Some(Filter {
is_list_ignored: false,
list: vec![Regex::new("nvme").unwrap()],
});
let mount_ignore = Some(Filter {
is_list_ignored: true,
list: vec![Regex::new("boot").unwrap()],
});
let mount_keep = Some(Filter {
is_list_ignored: false,
list: vec![Regex::new("boot").unwrap()],
});
let disk_ignore = Some(Filter::new(true, vec![Regex::new("nvme").unwrap()]));
let disk_keep = Some(Filter::new(false, vec![Regex::new("nvme").unwrap()]));
let mount_ignore = Some(Filter::new(true, vec![Regex::new("boot").unwrap()]));
let mount_keep = Some(Filter::new(false, vec![Regex::new("boot").unwrap()]));
assert_eq!(run_filter(&None, &None), vec![0, 1, 2, 3, 4]);
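To illustrate how the rewritten filters above combine (a sketch in the spirit of these tests, assuming should_keep keeps names that do not match an ignore-list filter):

#[test]
fn nvme_ignore_filter_sketch() {
    // Illustrative only: an ignore filter on "nvme" with no mount filter drops
    // nvme devices and keeps everything else.
    let disk_ignore = Some(Filter::new(true, vec![Regex::new("nvme").unwrap()]));
    assert!(!keep_disk_entry("/dev/nvme0n1p1", "/", &disk_ignore, &None));
    assert!(keep_disk_entry("/dev/sda1", "/home", &disk_ignore, &None));
}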


@ -5,10 +5,8 @@ use std::io;
use hashbrown::HashMap;
use serde::Deserialize;
use super::{keep_disk_entry, DiskHarvest, IoHarvest};
use crate::data_collection::{
deserialize_xo, disks::IoData, error::CollectionResult, DataCollector,
};
use super::{DiskHarvest, IoHarvest, keep_disk_entry};
use crate::collection::{DataCollector, deserialize_xo, disks::IoData, error::CollectionResult};
#[derive(Deserialize, Debug, Default)]
#[serde(rename_all = "kebab-case")]
@ -29,7 +27,6 @@ struct FileSystem {
pub fn get_io_usage() -> CollectionResult<IoHarvest> {
// TODO: Should this (and other I/O collectors) fail fast? In general, should
// collection ever fail fast?
#[allow(unused_mut)]
let mut io_harvest: HashMap<String, Option<IoData>> =
get_disk_info().map(|storage_system_information| {
storage_system_information
@ -41,7 +38,7 @@ pub fn get_io_usage() -> CollectionResult<IoHarvest> {
#[cfg(feature = "zfs")]
{
use crate::data_collection::disks::zfs_io_counters;
use crate::collection::disks::zfs_io_counters;
if let Ok(zfs_io) = zfs_io_counters::zfs_io_stats() {
for io in zfs_io.into_iter() {
let mount_point = io.device_name().to_string_lossy();

@ -1,7 +1,7 @@
//! Fallback disk info using sysinfo.
use super::{keep_disk_entry, DiskHarvest};
use crate::data_collection::DataCollector;
use super::{DiskHarvest, keep_disk_entry};
use crate::collection::DataCollector;
pub(crate) fn get_disk_usage(collector: &DataCollector) -> anyhow::Result<Vec<DiskHarvest>> {
let disks = &collector.sys.disks;

@ -24,8 +24,8 @@ cfg_if::cfg_if! {
use file_systems::*;
use usage::*;
use super::{keep_disk_entry, DiskHarvest};
use crate::data_collection::DataCollector;
use super::{DiskHarvest, keep_disk_entry};
use crate::collection::DataCollector;
/// Returns the disk usage of the mounted (and for now, physical) disks.
pub fn get_disk_usage(collector: &DataCollector) -> anyhow::Result<Vec<DiskHarvest>> {

@ -88,7 +88,7 @@ impl FileSystem {
matches!(self, FileSystem::Other(..))
}
#[allow(dead_code)]
#[expect(dead_code)]
#[inline]
/// Returns a string literal identifying this filesystem.
pub fn as_str(&self) -> &str {
@ -122,7 +122,6 @@ impl FromStr for FileSystem {
type Err = anyhow::Error;
#[inline]
fn from_str(s: &str) -> anyhow::Result<Self> {
// Done like this as `eq_ignore_ascii_case` avoids a string allocation.
Ok(if s.eq_ignore_ascii_case("ext2") {
@ -157,7 +156,7 @@ impl FromStr for FileSystem {
FileSystem::Bcachefs
} else if s.eq_ignore_ascii_case("minix") {
FileSystem::Minix
} else if s.eq_ignore_ascii_case("nilfs") {
} else if multi_eq_ignore_ascii_case!(s, "nilfs" | "nilfs2") {
FileSystem::Nilfs
} else if s.eq_ignore_ascii_case("xfs") {
FileSystem::Xfs

@ -7,7 +7,7 @@ use std::{
str::FromStr,
};
use crate::data_collection::disks::IoCounters;
use crate::collection::disks::IoCounters;
/// Copied from the `psutil` sources:
///
@ -87,7 +87,7 @@ pub fn io_stats() -> anyhow::Result<Vec<IoCounters>> {
#[cfg(feature = "zfs")]
{
use crate::data_collection::disks::zfs_io_counters;
use crate::collection::disks::zfs_io_counters;
if let Ok(mut zfs_io) = zfs_io_counters::zfs_io_stats() {
results.append(&mut zfs_io);
}

Some files were not shown because too many files have changed in this diff.