Recently, NVIDIA CUDA repository packages started shipping only the
`libnvidia-ml.so.1` file, without `libnvidia-ml.so`. A fix has been
proposed upstream in `nvml-wrapper`
(https://github.com/Cldfire/nvml-wrapper/pull/63), but the package is
currently looking for a maintainer.
To allow `bottom` to correctly detect NVIDIA GPUs on Ubuntu with
official NVIDIA packages, add a wrapper around `Nvml::init` to be more
persistent in its search for the NVML library.
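A minimal sketch of what such a wrapper could look like, assuming the `nvml-wrapper` builder API exposes a `lib_path` override; names and error handling here are illustrative, not the actual implementation:
```
use nvml_wrapper::{error::NvmlError, Nvml};

/// Try the default library name first, then retry with the versioned
/// `libnvidia-ml.so.1` SONAME that recent NVIDIA CUDA repo packages ship.
fn init_nvml() -> Result<Nvml, NvmlError> {
    match Nvml::init() {
        Ok(nvml) => Ok(nvml),
        Err(_) => Nvml::builder()
            .lib_path(std::ffi::OsStr::new("libnvidia-ml.so.1"))
            .init(),
    }
}
```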
* clean up some battery stuff
* dedupe battery from data conversion
* idk why we had a Value type alias
* clean up dupe load avg, and remove memory use percent from memharvest
* hmm
* nvm
* run a dep bump
* add widget border type
* feature: support custom widget borders
* fmt
* remove none since it looks really bad
* fix bug with title for tables with no title when expanded
* fix jsonschema
* fix some unused stuff
* refactor: lines
* shift around some stuff in Cargo.toml
* some docs
* some more cargo stuff
* clean up a bunch of stuff after making things less public
* clippy lints
* a lot more cleanup
* clippy
* fix some errors
* fix for windows
* refactor: separate schema generation to its own binary, go back to lib-bin setup
Decided it might be nicer to separate the schema generation bit into its
own binary. This does mean that we have to go back to the lib-bin
setup, as otherwise sharing code between the two is _really_ hard.
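For illustration, a dedicated schema-generation binary in a lib-bin layout might look roughly like this; the module path and the use of `schemars` are assumptions, not the actual code:
```
// src/bin/schema.rs — reuses the library crate instead of duplicating types.
use schemars::schema_for;

fn main() {
    // Hypothetical path; the point is that the config type lives in the lib,
    // so both the main binary and this generator can share it.
    let schema = schema_for!(bottom::options::Config);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```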
* handle versioning
* run fmt
* refactor: ignore warning for deprecated panic hook from Rust 1.82.0
* refactor: bump 'msrv' to 1.81 and update deprecated code
* some more cleanup
* even more cleanup
I had changed how this was parsed in code but forgot to update the default configs. This also adds some e2e tests to hopefully catch this for real in the future, since the schema tests don't catch this kind of thing and the constants test doesn't actually run the binary for a proper end-to-end check.
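A simplified sketch of the idea, exercising config parsing through the real binary by checking that a deliberately broken config is rejected; the flag name, config path, and binary name are assumptions here, not the actual test:
```
// tests/invalid_config.rs
use std::process::Command;

#[test]
fn invalid_config_is_rejected() {
    // CARGO_BIN_EXE_<name> is set by Cargo for integration tests.
    let output = Command::new(env!("CARGO_BIN_EXE_btm"))
        .args(["--config", "tests/invalid_configs/bad_value.toml"])
        .output()
        .expect("binary should run");

    // A config that fails to parse should make the binary bail with an error.
    assert!(!output.status.success());
}
```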
Ideally we minimize our usage of Cirrus CI, especially for typical PR CI workflows, since it's a bit kludgy to work with. This approach is also easier to extend to things like OpenBSD.
It's fine for deploys, I guess, since those aren't super frequent; at this point I have the automation working fairly well and I don't usually have to wait for it.
Actually support `$XDG_CONFIG_HOME` on macOS. Our docs say we do, but we, uh, don't, because the `dirs` crate doesn't.
Note this is backwards-compatible, in that if a config file exists in the old default locations, we will check those first.
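A minimal sketch of that lookup order, assuming the `dirs` crate; the file names and paths are illustrative:
```
use std::path::PathBuf;

/// Prefer an existing config in the old default location for backwards
/// compatibility, then honour $XDG_CONFIG_HOME, then fall back to the old
/// default anyway.
fn config_path() -> Option<PathBuf> {
    // Old default, e.g. ~/Library/Application Support/bottom/bottom.toml on macOS.
    if let Some(old) = dirs::config_dir().map(|p| p.join("bottom/bottom.toml")) {
        if old.exists() {
            return Some(old);
        }
    }

    // New behaviour: respect $XDG_CONFIG_HOME if it is set and non-empty.
    if let Ok(xdg) = std::env::var("XDG_CONFIG_HOME") {
        if !xdg.is_empty() {
            return Some(PathBuf::from(xdg).join("bottom/bottom.toml"));
        }
    }

    dirs::config_dir().map(|p| p.join("bottom/bottom.toml"))
}
```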
* other: show N/A for Nvidia GPUs if we detect one but can't get the temperature
* refactor: driveby refactor of filter system and code for temp
* missed one
* update changelog
* add another lib test to make sure valid integration configs are actually valid
* only test these on default config
* clippy
* add extra CI fail check
* fix windows
* bug: fix occasionally wrong runtime reported by sysinfo
Seems like on other platforms, sysinfo will sometimes report a run time
that starts from the UNIX epoch - this gives a nonsensical value of 19000+
days, and it looks a little more reasonable to just return 0 in
this case. We could also make it return N/A in the future, but this
is a quick fix for now.
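A minimal sketch of the workaround, with a hypothetical helper name and threshold:
```
/// sysinfo sometimes reports a run time measured from the UNIX epoch instead
/// of from process start, which shows up as 19000+ days. Treat anything that
/// absurd as bogus and report 0 instead.
fn sanitize_run_time(run_time_secs: u64) -> u64 {
    // ~10 000 days; no real process plausibly runs this long.
    const NONSENSE_THRESHOLD_SECS: u64 = 10_000 * 24 * 60 * 60;

    if run_time_secs >= NONSENSE_THRESHOLD_SECS {
        0
    } else {
        run_time_secs
    }
}
```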
* update changelog
Basically, I did:
```
long = "blah blah blah"
```
but it should have been:
```
long,
long_help = "blah blah blah"
```
The former makes the description the _long flag name_, which... well,
isn't right.
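For context, the corrected pattern in clap's derive API looks roughly like this; the field and help text are illustrative:
```
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Short one-line description shown by `-h`.
    // `long` alone keeps the flag name derived from the field, while
    // `long_help` supplies the long description instead of renaming the flag.
    #[arg(long, long_help = "Longer, multi-paragraph description shown by `--help`.")]
    battery: bool,
}
```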
Updates some outdated docs on filtering, and adds some tests as well. In particular, this also adds a cfg_attr on tests to try to catch unknown fields; we'll be more lenient in prod builds, though, and allow them.
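A sketch of that cfg_attr trick, assuming serde derive; the struct and field names are illustrative:
```
use serde::Deserialize;

/// Unknown fields are rejected only when compiled for tests, so typos in the
/// sample configs get caught there while prod builds stay lenient.
#[derive(Deserialize)]
#[cfg_attr(test, serde(deny_unknown_fields))]
struct TempFilterConfig {
    /// Illustrative field; the real config has more options.
    sensor_filter: Option<Vec<String>>,
}
```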