* move the hidden login form to the left to hide the overlaid
LastPass context buttons
* toggle the dropdown for the login form manually so that not
only clicks into the form but also clicks into the overlaid
LastPass context buttons are ignored for closing the dropdown
again; clicks on contained a and button elements do still
close it
* Added/changed mappings of profile to engine settings to
match Cura Legacy mapping:
  * perimeterBeforeInfill: taken from perimeter_before_infill (new,
fixes #1693)
* skinSpeed: taken from solidarea_speed (new)
* raftAirGapLayer0: sum of raft_airgap and raft_airgap_all
* raftAirGap: taken from raft_airgap_all (new)
* raftFanSpeed: changed to 0
* raftSurfaceThickness: taken from raft_surface_thickness (new)
* raftSurfaceLinewidth & raftSurfaceLineSpacing: taken from
raft_surface_linewidth (new)
* Mach3 GCODE flavor replaces the S parameter with a P parameter in
temperature commands within the generated GCODE, like in Cura
Legacy
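The S-to-P rewrite for the Mach3 flavor could look roughly like the following sketch. This is a hedged illustration, not the plugin's actual implementation; the function name `mach3_rewrite` and the exact set of temperature commands handled are assumptions.

```python
import re

# Temperature-setting commands whose S parameter the Mach3 flavor
# expects as P instead (assumed set, for illustration only)
TEMP_COMMANDS = re.compile(r"^(M104|M109|M140|M190)\b(.*)$")

def mach3_rewrite(line):
    """Replace S<value> with P<value> in temperature commands,
    leaving all other GCODE lines untouched."""
    match = TEMP_COMMANDS.match(line)
    if not match:
        return line
    return match.group(1) + match.group(2).replace("S", "P")
```

For example, `mach3_rewrite("M104 S210")` yields `"M104 P210"`, while ordinary movement commands pass through unchanged.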
If the slicer returns values for a tool, we want them in our analysis
result, even if they are zero. That way the result will be the same as
if our own built-in GCODE analyser had taken a look at the
file.
(cherry picked from commit 818ae92)
This fixes the issue that there was no information about filament
usage in the metadata after slicing with the Cura plugin. Trying to call
profile.get_float("filament_diameter") ended in an exception with the
message "'module' object has no attribute 'get_float'". So I defined
profile before using it and now it works.
See issue #1685
Also inserted a check that the filament usage is > 0 to exclude
tools with no filament usage from the metadata.
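The shape of the fix can be sketched as follows. The `Profile` class and `filament_metadata` helper here are stand-ins I made up for illustration; the point is that `profile` is bound to an instance before `get_float` is called on it, and that zero-usage tools are filtered out.

```python
import math

class Profile:
    """Stand-in for the Cura plugin's profile wrapper (assumed API)."""
    def __init__(self, settings):
        self._settings = settings

    def get_float(self, key):
        return float(self._settings.get(key, 0.0))

def filament_metadata(settings, extruded_lengths):
    # define profile before using it - calling get_float on the module
    # instead of an instance was the bug described above
    profile = Profile(settings)
    radius = profile.get_float("filament_diameter") / 2.0
    result = {}
    for tool, length in extruded_lengths.items():
        if length <= 0:
            # exclude tools with no filament usage from the metadata
            continue
        result[tool] = {
            "length": length,
            "volume": length * radius * radius * math.pi,
        }
    return result
```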
(cherry picked from commit c9b38bd)
* Properly handle G0/G1 with no X, Y, Z coordinates in relative mode
instead of duplicating coordinates - should fix #1675
* Only take move commands with X, Y, Z coordinates into account for
model size calculation - this makes our internal GCODE analysis behave
like the GCODE viewer's analysis and produce the same model size. The
downside is that extrusions on the origin are no longer taken into account
for checking if a model is within the bounds of the print bed, but that should
hopefully not produce any issues in the real world.
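Both fixes can be illustrated with a minimal position tracker. This is a simplified sketch, not the actual analysis code: a G0/G1 without any X/Y/Z coordinates (e.g. a pure feedrate or extrusion-only move) neither re-applies previous offsets in relative mode nor contributes a position to the model size calculation.

```python
import re

# matches an axis letter followed by its value, e.g. "X10.5"
AXIS = re.compile(r"([XYZ])([-+]?\d*\.?\d+)")

def track_moves(lines):
    pos = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    relative = False
    seen = []  # positions that count towards the model size
    for line in lines:
        stripped = line.split(";")[0].strip()
        if not stripped:
            continue
        code = stripped.split()[0]
        if code == "G90":
            relative = False
        elif code == "G91":
            relative = True
        elif code in ("G0", "G1"):
            coords = AXIS.findall(stripped)
            if not coords:
                # no X/Y/Z present: don't duplicate coordinates in
                # relative mode and don't count this move for the size
                continue
            for axis, value in coords:
                value = float(value)
                pos[axis] = pos[axis] + value if relative else value
            seen.append(dict(pos))
    return seen
```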
--basedir, --config, --verbose, --safe may now come before or after
subcommands and should still be evaluated.
For the server commands (legacy, "server" and "daemon"), the same
should now hold true for the related parameters --host, --port, --debug,
--logging, --iknowwhatimdoing and also --pid (for daemon command).
While having the parameters belong to the individual commands and only
there (which is click's basic approach) is much cleaner, too many people
were running into issues with that strict approach after all.
I just hope the somewhat hackish approach with context injection needed to
get the less strict version to work won't backfire badly in the long run.
See also #1633 and #1657
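The general pattern of letting the same option appear before or after the subcommand can be sketched like this. This is a simplified illustration of the context-injection idea, not OctoPrint's actual CLI wiring; the `serve` command name and the restriction to `--basedir` are assumptions for brevity.

```python
import click

def shared_options(func):
    """Attach the options accepted both before and after the subcommand."""
    return click.option("--basedir", default=None)(func)

@click.group()
@shared_options
@click.pass_context
def cli(ctx, basedir):
    # stash values given before the subcommand on the context
    ctx.ensure_object(dict)
    if basedir is not None:
        ctx.obj["basedir"] = basedir

@cli.command()
@shared_options
@click.pass_context
def serve(ctx, basedir):
    # a value given after the subcommand wins, otherwise fall back
    # to whatever was injected into the context by the group
    basedir = basedir if basedir is not None else ctx.obj.get("basedir")
    click.echo("basedir: {}".format(basedir))
```

With this, both `cli --basedir /foo serve` and `cli serve --basedir /foo` resolve to the same value.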
--version is a flag, not an actual parameter (that wouldn't really make
sense anyway). I'm not sure why that isn't the default behaviour of the
built-in version_option decorator tbh.
See #1647
Two problems solved:
* Make sure to only process temperature data once we
have printer profile information on hand to evaluate
the heater data. If we don't have that yet, create a
client-side backlog and process it once we have the necessary
data on hand.
* Do not use uninitialized history cutoff values - if our cutoff
value hasn't yet synced (no settings response arrived yet),
just don't perform the cutoff.
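The backlog pattern from the first point can be sketched as follows, in Python for brevity even though the actual client is JavaScript. The class and method names are made up for illustration; the idea is simply to buffer incoming temperature messages until the printer profile is known, then flush them in arrival order.

```python
class TemperatureClient:
    """Sketch of buffering temperature data until the profile arrives."""

    def __init__(self):
        self.profile = None
        self._backlog = []
        self.processed = []

    def on_printer_profile(self, profile):
        self.profile = profile
        # flush the backlog in the order the messages arrived
        backlog, self._backlog = self._backlog, []
        for data in backlog:
            self._process(data)

    def on_temperature_data(self, data):
        if self.profile is None:
            # can't evaluate heater data without the profile yet
            self._backlog.append(data)
        else:
            self._process(data)

    def _process(self, data):
        # only keep heaters the printer profile actually defines
        heaters = set(self.profile.get("heaters", []))
        self.processed.append({k: v for k, v in data.items() if k in heaters})
```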
That API endpoint really is a tough nut. ETag calculation now also
takes full settings dump from settings plugins into account, because
those might be providing custom keys through custom on_settings_load
implementations, for which we will not notice any changes if we are
only looking at the effective config.
Of course, the more we put into that ETag calculation, the slower it will
be and the less sense it will make. Somewhat annoying :/
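A rough sketch of the idea, with hypothetical names: hash the effective config together with every settings plugin's full `on_settings_load` dump, so that plugin-provided custom keys also invalidate the ETag.

```python
import hashlib
import json

def settings_etag(effective_config, settings_plugins):
    """Compute an ETag over the effective config plus every settings
    plugin's full settings dump (sketch, not the actual handler)."""
    hasher = hashlib.sha1()
    hasher.update(json.dumps(effective_config, sort_keys=True).encode("utf-8"))
    # iterate plugins in a stable order so the hash is deterministic
    for name in sorted(settings_plugins):
        dump = settings_plugins[name].on_settings_load()
        hasher.update(json.dumps(dump, sort_keys=True).encode("utf-8"))
    return hasher.hexdigest()
```

Any change in a plugin's custom keys now changes the digest, at the cost of serializing every plugin's dump on each request.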
Not testing if oldRoot was actually set and contained the
key in question could cause issues if a completely new data
structure was sent to the backend that was not mirrored by
the default settings - e.g. complex configuration
items in an object that is empty by default.
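A minimal sketch of the guard (hypothetical helper, not the actual backend code): when diffing incoming settings against the previous state, only recurse if the old structure actually is a dict and contains the key; otherwise take the new value wholesale instead of crashing on the missing lookup.

```python
def settings_diff(old, new):
    """Return the subset of ``new`` that differs from ``old``."""
    result = {}
    for key, value in new.items():
        if isinstance(old, dict) and key in old:
            if isinstance(value, dict) and isinstance(old[key], dict):
                nested = settings_diff(old[key], value)
                if nested:
                    result[key] = nested
            elif old[key] != value:
                result[key] = value
        else:
            # key unknown to the old/default data: keep it as-is rather
            # than assuming the old structure mirrors the new one
            result[key] = value
    return result
```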
sarge's "wait_events" is unreliable. If an asynchronous
job is started but stops immediately and raises a sarge
Exception (inside the async thread), the associated
command's event will never be set even though the
process finished. So we'd wait indefinitely here.
We circumvent this by first waiting until the commands
are parsed and processed (p.commands contains
elements), then until said commands are started and then
making sure the command's process is actually set. Only
then do we actually have a background process running
that we'll be able to monitor further down, otherwise
the command immediately failed.
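The more robust wait described above could look roughly like this polling helper. It is a sketch under the assumption that the pipeline object exposes sarge's `commands` list with a `process` attribute per command; the helper name and timeout values are made up.

```python
import time

def wait_for_startup(pipeline, timeout=5.0, interval=0.05):
    """Poll until every parsed command has an actual process attached.

    Returns True once a background process is running for each command,
    False if that doesn't happen within the timeout - e.g. because a
    command failed immediately and never got a process.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        commands = pipeline.commands
        # first the commands must be parsed (list non-empty), then each
        # must have been started (process set)
        if commands and all(c.process is not None for c in commands):
            return True
        time.sleep(interval)
    return False
```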
Removed a potential deadlock, added logging for all
raised exceptions, made _to_error more robust and
removed another potential encoding issue when
creating diffs
Having that output stay on stderr and hence in shiny red looks way
too alarming considering that it's only a pip update that is not THAT
critical usually (and we don't want to do it automatically anyhow
considering how often that appears to break stuff).