If the slicer returns values for a tool, we want them in our analysis
result, even if they are zero. That way the result will be the same as
if we had our own built-in GCODE analyser take a look at the
file.
(cherry picked from commit 818ae92)
This fixes the issue that there was no information about filament
usage in the metadata after slicing with the Cura plugin. Trying to call
profile.get_float("filament_diameter") resulted in an exception with the
message "'module' object has no attribute 'get_float'". So I defined
profile before using it, and now it works.
See issue #1685
Also inserted a check that filament usage is > 0, to exclude tools
with no filament usage from the metadata.
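The > 0 check amounts to filtering the per-tool usage before writing metadata. A minimal sketch (hypothetical helper name, not the plugin's actual code):

```python
def filament_usage_metadata(usage_per_tool):
    """Only keep tools whose reported filament usage is positive in the
    metadata; tools with zero usage are dropped."""
    return {tool: length for tool, length in usage_per_tool.items()
            if length > 0}

metadata = filament_usage_metadata({"tool0": 123.4, "tool1": 0.0})
```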
(cherry picked from commit c9b38bd)
* Properly handle G0/G1 with no X, Y, Z coordinates in relative mode
instead of duplicating coordinates - should fix #1675
* Only take move commands with X, Y, Z coordinates into account for
model size calculation - this makes our internal GCODE analysis behave
like the GCODE viewer's analysis and produce the same model size. The
downside is that extrusions on the origin are no longer taken into account
for checking if a model is within bounds of the print bed, but that should
hopefully not produce any issues in the real world.
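The relative-mode fix can be illustrated like this - in relative mode, an axis missing from the command contributes no offset instead of re-applying the previous coordinate (a sketch of the behavior, not OctoPrint's actual analysis code):

```python
def apply_move(position, args, relative):
    """Apply a G0/G1 move given the current position and the command's
    axis arguments. Axes absent from the command stay put."""
    x, y, z = position
    if relative:
        # missing axis => zero offset, NOT a duplicated coordinate
        return (x + args.get("X", 0.0),
                y + args.get("Y", 0.0),
                z + args.get("Z", 0.0))
    # absolute mode: missing axis keeps its previous value
    return (args.get("X", x), args.get("Y", y), args.get("Z", z))

# a G1 with no coordinates in relative mode must not move the head
stay = apply_move((10.0, 20.0, 0.3), {}, relative=True)
move = apply_move((10.0, 20.0, 0.3), {"X": 5.0}, relative=True)
```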
Variables defined in an outer scope can no longer be set from an inner scope (see
pallets/jinja#641). Regardless of whether that is right or wrong, we can't control if
people are using such constructs in their plugins, which versions of Jinja >= 2.9 would
now break out of the blue, regardless of OctoPrint version. That is unacceptable sadly
and requires pinning for now, until plugin authors have had a chance to adapt
accordingly.
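For illustration, a template pattern of the affected kind (a hedged sketch - whether a given plugin's template relied on this will vary):

```jinja
{% set found = false %}
{% for item in items %}
  {% if item.matches %}
    {# Under Jinja >= 2.9 scoping this only rebinds `found` inside the
       loop body; the outer `found` stays false. Jinja 2.10 later added
       namespace() as the supported way to do this. #}
    {% set found = true %}
  {% endif %}
{% endfor %}
```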
Also see #1697.
--basedir, --config, --verbose, --safe may now come before or after
subcommands and should still be evaluated.
For the server commands (legacy, "server" and "daemon"), the same
should now hold true for the related parameters --host, --port, --debug,
--logging, --iknowwhatimdoing and also --pid (for daemon command).
While having the parameters belong to the individual commands and only
there (which is click's basic approach) is far cleaner, too many people
were running into issues with that strict approach after all.
I just hope the somewhat hackish approach with context injection needed to
get the less strict version to work won't backfire badly in the long run.
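The idea of accepting the global options on either side of the subcommand can be sketched with plain argv preprocessing - a simplified stdlib illustration of the concept, not the actual click-based implementation:

```python
def extract_global_options(argv):
    """Pull known global options out of argv wherever they appear, so that
    e.g. `octoprint serve --basedir X` and `octoprint --basedir X serve`
    end up equivalent."""
    takes_value = {"--basedir": True, "--config": True,
                   "--verbose": False, "--safe": False}
    options, remainder = {}, []
    i = 0
    while i < len(argv):
        arg = argv[i]
        if arg in takes_value:
            if takes_value[arg]:           # option with a value
                options[arg.lstrip("-")] = argv[i + 1]
                i += 2
            else:                          # boolean flag
                options[arg.lstrip("-")] = True
                i += 1
        else:
            remainder.append(arg)          # subcommand / everything else
            i += 1
    return options, remainder

opts, rest = extract_global_options(["serve", "--basedir", "/tmp/octo", "--verbose"])
```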
See also #1633 and #1657
--version is a flag, not an actual parameter (it wouldn't really make
sense as one anyway). I'm not sure why that isn't the default behaviour
of the built-in version_option decorator, to be honest.
See #1647
Two problems solved:
* Make sure to only process temperature data once we
have printer profile information on hand to evaluate
the heater data. If we don't have that yet, create a client
side backlog and process that once we have the necessary
data on hand.
* Do not use uninitialized history cutoff values - if our cutoff
value hasn't yet synced (no settings response arrived yet),
just don't perform the cutoff.
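Both fixes follow the same pattern - don't act on data you can't interpret yet. Sketched here in Python for brevity (the actual code lives client-side):

```python
class ClientTemperatureState:
    """Buffer temperature updates until printer profile data is available,
    and skip the history cutoff while the cutoff value hasn't synced yet."""

    def __init__(self):
        self.profile = None
        self.cutoff = None   # minutes of history to keep; None = not synced
        self.backlog = []
        self.history = []

    def on_temperature(self, entry):
        if self.profile is None:
            # no profile yet, we can't evaluate the heater data: backlog it
            self.backlog.append(entry)
            return
        self._process(entry)

    def on_profile(self, profile):
        self.profile = profile
        pending, self.backlog = self.backlog, []
        for entry in pending:        # process the backlog now that we can
            self._process(entry)

    def _process(self, entry):
        self.history.append(entry)
        if self.cutoff is not None:  # uninitialized cutoff: don't trim
            oldest = entry["time"] - self.cutoff * 60
            self.history = [e for e in self.history if e["time"] >= oldest]

state = ClientTemperatureState()
state.on_temperature({"time": 0.0, "tool0": 200.0})  # arrives before profile
state.on_profile({"heaters": ["tool0"]})             # flushes the backlog
```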
That API endpoint really is a tough nut. ETag calculation now also
takes full settings dump from settings plugins into account, because
those might be providing custom keys through custom on_settings_load
implementations, for which we will not notice any changes if we are
only looking at the effective config.
Of course, the more we put into that ETag calculation, the slower it will
be and the less sense it will make. Somewhat annoying :/
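The gist can be sketched as hashing both inputs together (a hypothetical helper, not the actual implementation):

```python
import hashlib
import json

def settings_etag(effective_config, plugin_settings):
    """Derive an ETag from the effective config AND the full settings
    dumps of all settings plugins, since custom on_settings_load keys
    can change without the effective config changing at all."""
    payload = json.dumps(
        {"config": effective_config, "plugins": plugin_settings},
        sort_keys=True,
    )
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()

etag_a = settings_etag({"api": True}, {"myplugin": {"key": 1}})
etag_b = settings_etag({"api": True}, {"myplugin": {"key": 2}})
```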
Not testing if oldRoot was actually set and contained the
key in question could cause issues if a completely new data
structure was sent to the backend that was not mirrored by
the default settings - e.g. complex configuration items inside a
by-default empty object.
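The guard amounts to never assuming the old tree contains a key (a hypothetical helper illustrating the check, not the actual code):

```python
def old_value(old_root, path):
    """Look up `path` (a list of keys) in the previous settings tree,
    guarding against a root that was never set or that lacks the key,
    e.g. when the client sent a completely new data structure that is
    not mirrored by the default settings."""
    node = old_root
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node
```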
sarge's "wait_events" is unreliable. If an asynchronous
job is started but stops immediately and raises a sarge
Exception (inside the async thread), the associated
command's event will never be set even though the
process finished. So we'd wait indefinitely here.
We circumvent this by first waiting until the commands
are parsed and processed (p.commands contains
elements), then until said commands are started and then
making sure the command's process is actually set. Only
then do we actually have a background process running
that we'll be able to monitor further down, otherwise
the command immediately failed.
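The two-stage wait can be sketched like this (a simplified polling sketch of the workaround; `pipeline` stands in for a sarge Pipeline run asynchronously):

```python
import time
from types import SimpleNamespace

def wait_for_started_command(pipeline, timeout=5.0, interval=0.05):
    """Wait until sarge has parsed the command (pipeline.commands is
    populated) and attached a process handle to it; return the command,
    or None if it never started (e.g. it failed immediately inside the
    async thread and its event was never set)."""
    deadline = time.monotonic() + timeout
    while not pipeline.commands and time.monotonic() < deadline:
        time.sleep(interval)   # wait for the command to be parsed/processed
    if not pipeline.commands:
        return None
    command = pipeline.commands[0]
    while command.process is None and time.monotonic() < deadline:
        time.sleep(interval)   # wait for the command to actually start
    return command if command.process is not None else None

# stand-in for a pipeline whose first command already has a process handle
demo = SimpleNamespace(commands=[SimpleNamespace(process=object())])
started = wait_for_started_command(demo)
```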
Removed a potential deadlock, added logging for all
raised exceptions, made _to_error more robust, and
removed another potential encoding issue when
creating diffs