Recently, I found a bug where Ultimaker S-line firmware hangs and fails to print 3D models that have a complex bottom layer or lack ;LAYER: comments. When these conditions (driven by the 3D model and the exported G-code) are bad enough, footprint discovery does not finish within a time limit and the Ultimaker printer requires a soft reset.
What the user sees:
1. The Ultimaker S-line printer homes the printhead and bed.
2. The bed and nozzles preheat.
3. The bed raises for bed probing using the nozzles.
4. The bed stops moving before touching the probing nozzle, and the printer waits.
5. After a few minutes, the bed Z-axis stepper motor is powered down due to inactivity and the bed may slide down.
6. Eventually the printer shows an error:
An unspecified error has occurred in the motion controller. Restore to default settings.
The printer user interface shows only one available action: Reboot. After rebooting and retrying the print, the same error occurs.
Note: I added AI generated header images to break up the walls of text in this post and keep things interesting for non-technical readers.
Ultimaker S-line 3D printers run both a Linux kernel and Marlin firmware. The touch screen user interface and networking are handled by the Linux “management” side, and the printer motion controller code is based on Marlin. The last public documentation on Ultimaker’s architecture is the helpful Inside the Ultimaker 3 - Day 4 - Electronics post from 2016 on the Ultimaker forum.
This is in contrast to the Ultimaker Original+ and 2, which use an embedded ATmega2560 microcontroller running Marlin to control all aspects of the printer.
Fortunately, the Ultimaker S-line has logging functionality built into the Linux management side, and a logdump is written to the plugged-in USB drive before rebooting.
After opening the compressed logdump, I found the dmesg log in the boot.log series of files. The dmesg output shows all saved logs up to the last error that triggered the logdump. I located the relevant portion of the log pertaining to the last failed print attempt by searching for the filename of the failed print file.
INF - parseHeaderStep:70 - Handling file '/media/usb0/XXX'
Looking further up in the log, the USB flash drive detection event can be found.
The following lines of the log show the printer calibration and preheating status. There may be a couple of lines with “errors” due to lack of internet connectivity or an Ultimaker subscription. I’m using an Ultimaker S7 and S5 at school. The first obvious sign of trouble is this line:
WAR - transportLayer:232 - Got error line 1 from printer: Error:SERIAL_INPUT_TIMEOUT: No commands received over time. Safety shutdown
followed by the next lines in between the AUTO_LEVEL_BED abort messages:
ERR - controller:956 - Halting ALL procedures
ERR - applicationLayer:324 - Marlin error: SYSTEM HALT!
At this point, the printer asks you to reboot it.
There’s not much to go off of besides SERIAL_INPUT_TIMEOUT, which looks like a constant value.
I have contributed to the Marlin 3D printer firmware project before and Ultimaker was a main contributor to Marlin firmware in the past.
I’m not sure how actively Ultimaker contributes to the open source version of Marlin today, but it’s nice to see that Marlin lives on in their 3D printer lineup.
Ultimaker has released the source code of their version of Marlin, called UltimakerMarlin. There is a branch in the UltimakerMarlin GitHub repo named S-line with the last update in February 2021. I searched the code for SERIAL_INPUT_TIMEOUT. The resulting code shows that check_serial_input_timeout() checks whether the time since the last serial data input has exceeded MONITOR_SERIAL_INPUT_TIMEOUT seconds; if so, stop() is called with the reason STOP_REASON_SERIAL_INPUT_TIMEOUT. stop() then sends the serial message through SERIAL_ECHOLNPGM, which the Marlin developers recognize. MONITOR_SERIAL_INPUT_TIMEOUT is defined as 5 minutes in the most recent released code.
Ultimaker has added a message protocol in UltimakerMarlin to communicate with the Linux side of the printer, with messages formatted in a standard way.
Based on what we’ve found so far, it’s highly likely UltimakerMarlin — which runs the motion controller side of the printer — stopped receiving data from the Linux “management/UI” side for at least 5 minutes.
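As a mental model, the firmware side behaves like a watchdog timer. Below is a minimal Python sketch of that logic; the class name, method names, and injectable clock are my own illustrative stand-ins, not the actual UltimakerMarlin code (which is C++):

```python
import time

# Timeout per the released UltimakerMarlin source: 5 minutes.
MONITOR_SERIAL_INPUT_TIMEOUT = 5 * 60

class SerialWatchdog:
    """Illustrative model of the firmware's serial input timeout check."""

    def __init__(self, clock=time.monotonic) -> None:
        self._clock = clock                 # injectable for testing
        self._last_input = self._clock()
        self.halted = False

    def on_serial_input(self) -> None:
        # Any message from the Linux management side resets the timer.
        self._last_input = self._clock()

    def poll(self) -> None:
        # Stands in for check_serial_input_timeout() in the firmware main loop.
        if self._clock() - self._last_input > MONITOR_SERIAL_INPUT_TIMEOUT:
            # The real firmware calls stop(STOP_REASON_SERIAL_INPUT_TIMEOUT),
            # which emits the SERIAL_INPUT_TIMEOUT message seen in the logs.
            self.halted = True
```

Injecting the clock makes the 5 minute boundary easy to exercise without waiting in real time.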
I narrowed down which logged abnormalities were likely to have caused the Linux management side to stop sending messages to the motion controller by comparing the logs of a successful print with a failed print.
Remember that the printer dumps all recent saved logs? Another user of the printer had successfully finished a print, and I found the logs from that past print job further up in the boot.log file.
Note: Ultimaker logs do not appear to contain any private data. The model filename and optimized probing points are the only identifying info. The software state at the time of an error seems to be sent to Sentry, a crash analytics service, if the printer is connected to the internet. Unlike other 3D printer companies, Ultimaker appears to be transparent about their logging and does not obfuscate logs.
I viewed the logs from the failed and successful prints side by side and first scrolled to the chronological location where the SERIAL_INPUT_TIMEOUT was found. The error shows up 3 lines after the following unique BedLevelProbingProcedure step is logged in both the failed and successful prints.
INF - procedure:489 - BedLevelProbingProcedure(key='AUTO_LEVEL_BED', outcome=None) transitioning from 'ProbeSingleNozzleOffsetStep(key='PROBE_Z_OFFSET_FOR_VALIDATION_0')' > 'GotoPositionStep(key='GO_TO_SAFE_TRAVEL_HEIGHT_STEP')'
INF - printerService:195 - Procedure next step: AUTO_LEVEL_BED: GO TO SAFE TRAVEL HEIGHT STEP
INF - gotoPositionStep:86 - Moving to: x:None y:None z:20 e:None speed:None, relative:False, immediate:True
The bed probing procedure matches what I observed in real life. The printer homes the printhead and bed, preheats the bed and nozzles, and raises the bed along the Z axis towards the nozzle, which is at the 0 Z position.
Note: For those unfamiliar with the Ultimaker cartesian printer design, the higher Z position is at the bottom of the printer. As the object is printed and gains height, the bed lowers away from the printhead, which is fixed at the top of the printer case.
After the error message is received by the Ultimaker Linux management system, the next queued bed level procedure step is started before the management system starts halting procedures.
The only obvious difference between the successful and failed print files was their size. I had previously printed models sliced in both Cura and PrusaSlicer successfully, so the slicer was unlikely to be the cause; both slicers output standard G-code printing commands. The failed print file was larger. Maybe the Ultimaker system could not handle a larger print file?
The logdump is the only user visible record of what events led up to the motion controller crash and forced printer reboot. We need to use intuition and find more patterns in the logs we have.
The file size of the print G-code is the only factor that increases the Ultimaker’s processing time. The maximum axes, number of extruders, materials, and bed probe area have set upper bounds that do not grow with G-code size.
The file size shouldn’t matter, since G-code is newline-delimited and the memory-efficient way to process it is to read and execute a single line at a time.
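For illustration, a line-at-a-time reader keeps memory use constant no matter how large the file is. This is a hypothetical helper, not Ultimaker’s parser:

```python
from typing import Iterator

def stream_gcode(path: str) -> Iterator[str]:
    """Yield executable G-code lines one at a time without loading the file.

    Illustrative sketch: memory use stays constant regardless of file size.
    """
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if not line or line.startswith(";"):
                continue  # skip blank lines and comments
            yield line
```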
Was the Ultimaker preprocessing the file for some purpose that could bog down the Linux management system?
INF - gCodeFootprintFinder:69 - PARSED LAYER: ['LAYER', '0']
INF - gCodeFootprintFinder:69 - PARSED LAYER: ['LAYER', '1']
The above events are logged in the successful print.
Wait. The Ultimaker is parsing layer 0 and layer 1 as part of the bed probe optimization. By determining the minimum footprint of the printed object that touches the bed, bed probing can cover only the printed area, which saves time.
A footprint completion event for the successful print:
INF - footprintProbeGridComputer:51 - Parsed the GCode to find the footprint coordinates in 1.2092304229736328 seconds
A footprint completion event for the failed print:
INF - footprintProbeGridComputer:51 - Parsed the GCode to find the footprint coordinates in 696.0990624427795 seconds
696 seconds is a long time. It’s longer than the motion controller timeout MONITOR_SERIAL_INPUT_TIMEOUT, which is 300 seconds (5 minutes).
While the footprint finder runs, a couple of synchronization warnings are logged.
WAR - timer:257 - Timer(PrintProcedureMetadataHelper.onChanged.timer) ran more than a second out of sync! (-1.116920)
The footprint finder takes too long and blocks the Linux management controller from sending new messages to the Marlin motion controller.
Eventually the footprint calculation finishes and the printer moves to the next queued step in the AUTO_LEVEL_BED procedure. After the printer has processed the next queued step, it gets around to processing the motion controller SERIAL_INPUT_TIMEOUT error that occurred because the footprint calculation blocked message sending for more than 5 minutes.
After SERIAL_INPUT_TIMEOUT is received, the Ultimaker management system logs an attempt to end the footprint calculation as STOP_FOOTPRINT_COMPUTER and probes the entire bed in lieu of probing a smaller area fit to the footprint.
INF - procedure:489 - BedLevelProbingProcedure(key='AUTO_LEVEL_BED', outcome=OutcomeBase.Aborted) transitioning from 'SwitchActiveHotendStep(key='SWITCH_HOTEND_FOR_VALIDATION_0')' > 'CallbackStep(key='STOP_FOOTPRINT_COMPUTER')'
INF - printerService:195 - Procedure next step: AUTO_LEVEL_BED: STOP_FOOTPRINT_COMPUTER
INF - footprintProbeGridComputer:73 - Active thread was still running, setting all cells to probe
INF - gCodeFootprintFinder:31 - Received call to stop footprint computation
STOP_FOOTPRINT_COMPUTER is sent as soon as the Marlin motion controller error is received, so the decision to abort the footprint calculation only happens upon a motion controller error; STOP_FOOTPRINT_COMPUTER is not preemptively invoked before the 5 minute timeout window.
After a motion controller error, the only UI action available is Reboot, which is a soft reset that does not require physically toggling the printer power switch.
I assume that the single-threaded nature of Python locks execution during the footprint calculation and blocks the message system, which also runs in Python, from talking to the motion controller.
We could end here and call it a day but let’s see if we can figure out where the bug resides in the Ultimaker system. Maybe we will find a workaround in the current firmware and possible patches! If you want to see the G-code workaround now, feel free to scroll to the end of this post.
Ultimaker has helpfully included line numbers in logged events. So all we need to initially do is follow the footprints.
Starting with the first footprint event:
INF - bedLevelProbingProcedure:306 - Using footprint probing
Line 306 of bedLevelProbingProcedure.py is in the abbreviated function prepareFootprintProbing() below.
def prepareFootprintProbing(self) -> None:
    try:
        print_procedure = cast(PrintProcedure, self.__controller.getProcedure("PRINT"))
        gcode_metadata = print_procedure.getGcodeMetaData()
    except ValueError:
        gcode_metadata = None
    if gcode_metadata is not None and gcode_metadata.getGroupCount() <= 1:
        log.info("Using footprint probing")  # <--- Line 306
        self.__footprint_probe_grid_computer = FootprintProbeGridComputer(self.__controller)
        self.__footprint_probe_grid_computer.computeFootprintProbeGrid(probe_grid=self.__probe_grid)
prepareFootprintProbing() starts the footprint discovery by calling computeFootprintProbeGrid(probe_grid: ProbeGrid). computeFootprintProbeGrid() first calls findFootprintCoordinates(stream: IO[bytes]) -> List[List[float]] to get all movement command coordinates (G0 and G1 only; no G2/G3 arc movements).
def findFootprintCoordinates(self, stream: IO[bytes]) -> List[List[float]]:
    self.__reset()
    for line in stream:
        self.__process(line)
        if self.__should_stop:
            break
    return self.__coordinates
The file stream is read line by line until ;LAYER:N is found with a number N greater than 0. During this loop, a variable __should_stop is checked to see if the footprint coordinate finding should stop. __should_stop is set to True when a layer marker with a layer number greater than 0 is found, or when the stop() function is called from finalizeComputation(), which in turn is called when the STOP_FOOTPRINT_COMPUTER message is received or the footprint grid is needed on demand.
if self.__layer > 0:
    self.__should_stop = True
    return  # Not processing this layer

def stop(self) -> None:
    log.info("Received call to stop footprint computation")
    self.__should_stop = True
After all the coordinates touched before layer > 0 are returned, computeFootprintProbeGrid(probe_grid: ProbeGrid) calls ConvexHull.find(cls, points: List[List[float]]) -> List[Vector2] to return a trimmed list of the convex hull coordinates (the outer ring of all found coordinates) and updateWithBoundedSubSet(boundary_vectors: List[Vector2]) (I’ll call this the Grid Bounded SubSet) to figure out the actual grid cells to probe.
Unlike findFootprintCoordinates(stream: IO[bytes]), which ends early when the STOP_FOOTPRINT_COMPUTER message is received, the ConvexHull and updateWithBoundedSubSet(boundary_vectors: List[Vector2]) algorithms do not check for early termination.
The convex hull algorithm used is Quickhull, which has a worst case run time of O(N^2), where N is the number of coordinates before layer 1, although the most likely worst run time is O(N) if the printed model has a circular base. I didn’t look at the Grid Bounded SubSet algorithm too closely, but it seems to have a similar worst case runtime of O(N^2).
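To see why a circular base is painful, note that every sampled point on a circle lies on the hull, so the hull step trims nothing. The sketch below uses Andrew’s monotone chain algorithm (not the Quickhull that Ultimaker uses, but it produces the same hull) to compare a square footprint against a circular one:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain: same hull output as Quickhull, O(N log N)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Positive if o -> a -> b turns counter-clockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]

    return half(pts) + half(list(reversed(pts)))

# A square footprint sampled densely along its border: only 4 extreme points.
square = [(x, y) for x in range(50) for y in (0, 49)] + \
         [(x, y) for x in (0, 49) for y in range(50)]

# A circular footprint: every sampled point is extreme, nothing is trimmed.
circle = [(round(math.cos(2 * math.pi * i / 360), 9),
           round(math.sin(2 * math.pi * i / 360), 9)) for i in range(360)]
```

The square’s border points collapse to 4 hull vertices, while the circle keeps all 360 sampled points, so the later grid computation gets no reduction in input size.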
Now that we are looking at the code, the log message we examined previously that reported the footprint coordinate G-code parsing time does not represent the total processing time to find a probing grid. The parsing time entry is logged after all coordinates in the file have been found by findFootprintCoordinates(stream: IO[bytes]) and before the convex hull and probing grid cells are calculated.
Now it makes sense why the log event for STOP_FOOTPRINT_COMPUTER was found AFTER the log events for the footprint coordinate G-code parsing time and the motion controller error.
INF - printerService:195 - Procedure next step: AUTO_LEVEL_BED: STOP_FOOTPRINT_COMPUTER
INF - footprintProbeGridComputer:73 - Active thread was still running, setting all cells to probe
def finalizeComputation(self) -> None:
    if self._thread is not None:
        if self._thread.is_alive():
            log.info("Active thread was still running, setting all cells to probe")
            self.__footprint_finder.stop()
            self._thread.join()
            self.__probe_grid.setCellsToProbe(1)
            self.__probe_grid.updateQuickGridIndices(dimension=self.__probing_config.dimension)
The Convex Hull or Grid Bounded SubSet algorithm keeps running from when the parsing time message is logged until it fully completes.
The Python thread does not join in time and updateQuickGridIndices(dimension: Dimension) never runs (we know this because the new quick probe indices are not logged like they are in the successful print log). After this attempt to abort the footprint finder early is unsuccessful, the Ultimaker printer would normally wait for the footprint finder to end naturally.
ERR - applicationLayer:324 - Marlin error: SYSTEM HALT!
ERR - controller:956 - Halting ALL procedures
Due to the Marlin motion controller error, the management system determines an unsafe condition may have occurred (e.g. the motion controller keeps the heater on), so more error messages are logged indicating that additional parts of the system are halting. The Ultimaker then requires a reboot.
The Linux management controller could keep track of the time elapsed since the last message to the motion controller. It could also send STOP_FOOTPRINT_COMPUTER and fall back to a full bed probe before the 5 minute timeout window expires if the communication system cannot be refactored around Python threading and blocking issues.
A reference to the __should_stop variable could be passed to the Convex Hull and Grid Bounded SubSet algorithms. Each loop of the algorithms would then check the referenced variable to see if it should end early.
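Sketched in Python, that cooperative cancellation could look like the following; StopFlag and bounded_subset are hypothetical names standing in for __should_stop and the Grid Bounded SubSet loop:

```python
import threading

class StopFlag:
    """Shared cancellation token; a reference can be handed to each algorithm."""

    def __init__(self) -> None:
        self._event = threading.Event()

    def set(self) -> None:
        self._event.set()

    def __bool__(self) -> bool:
        return self._event.is_set()

def bounded_subset(cells, stop):
    """Hypothetical stand-in for the Grid Bounded SubSet loop.

    Each iteration checks the flag so a STOP_FOOTPRINT_COMPUTER message
    can end the computation early.
    """
    selected = []
    for cell in cells:
        if stop:          # cooperative early exit
            return None   # caller falls back to probing the full bed
        selected.append(cell)
    return selected
```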
If we need to print with the current Ultimaker S-line firmware and cannot change the 3D model or add proper ;LAYER indicators, we need to supply the footprint finder with a simplified set of coordinates before ending it with our own ;LAYER:1.
Ultimaker footprint finder G-code matching
# Only handle G0, G1 commands with X,Y movement
keys = gcode_line.getParameterDictionaryKeys()
if "G" in keys and gcode_line.getValue("G", return_type=str) in self.__allowed_commands:
    if "X" in keys or "Y" in keys:
        self.__coordinates.append(self.__currentPositionToCoordinates())
In general, the workaround requirements are:
1. Include G0 travel commands to the 4 corners of the bed (extruder movement is not needed) in our start G-code. Although these commands will be found ahead of time by the footprint finder’s lookahead, they should come AFTER any homing code to avoid possible collisions.
2. Include a ;LAYER:1 after these G0 travel commands so that the footprint finder ends due to encountering a layer marker for a layer > 0.
Start G-code workaround example for PrusaSlicer:
G280 S1 ; Ultimaker 3 and S-line home and bed probe without prime blob
; Stop bed probe area computation early and use the entire bed.
G0 X{print_bed_min[0]} Y{print_bed_max[1]}
G0 X{print_bed_min[0]} Y{print_bed_min[1]}
G0 X{print_bed_max[0]} Y{print_bed_min[1]}
G0 X{print_bed_max[0]} Y{print_bed_max[1]}
;LAYER:1
; Put the rest of your start G-code here
Note: We still need to show a minimum of the actual print bottom footprint coordinates to the footprint finder before it stops, or else the footprint finder will return a single prefilled point, which is the default nozzle probe position in the back right of the bed, and the printer will warn about an unbalanced model.
After I posted this on the Ultimaker forum, Robinmdh from Team Ultimaker mentioned that adding ;PRINT.GROUPS:2 to the header will also skip adaptive bed probing. Setting 2 print groups indicates additional first layers for other objects printed further in the G-code.
I found another bug in the Ultimaker G-code header parser while creating a PrusaSlicer profile for the Ultimaker S-line printers. The Ultimaker Griffin G-code header must be replicated for the Ultimaker S-line printers to recognize G-code files as valid.
When the parser validates a GENERATOR.VERSION composed only of numbers and periods without a third version component (usually the patch or bugfix number), the UI shows an ER999 - An unspecified error has occurred screen that forces a reboot. The G-code header values and results:
GENERATOR.VERSION:2.7.1 <- good
GENERATOR.VERSION:2.7 <- bad
GENERATOR.VERSION:A.A <- good
ERR - procedureStep:113 - Exception caught from step: CHECK
Traceback (most recent call last):
File "/usr/share/griffin/griffin/printer/procedures/procedureStep.py", line 89, in _run
outcome = self.run()
File "/usr/share/griffin/griffin/printer/procedures/pre_and_post_print/runChecksBeforePrint.py", line 77, in run
results = gcode_metadata.getValidator(c).validate() # type: List[GCodeMetaDataContainerValidator.ValidationResult]
File "/usr/share/griffin/griffin/datatypes/gcodeMetaDataContainerValidator.py", line 44, in validate
self.__validateCuraVersion()
File "/usr/share/griffin/griffin/datatypes/gcodeMetaDataContainerValidator.py", line 93, in __validateCuraVersion
version: str = self.__header_container.getGeneratorVersion()
File "/usr/share/griffin/griffin/datatypes/gcodeMetaDataContainer.py", line 45, in getGeneratorVersion
return self.__metadata[self.__metadata_prefix + "generator"].get("version", "0.0.0").lower()
AttributeError: 'float' object has no attribute 'lower'
CRI - marvinService:127 - Added Fault: <Fault: level=2 code=102 message='Unhandled exception from ProcedureStep CHECK' data='dbus.Dictionary({}, signature=dbus.Signature('sv'))'>
The G-code header validator attempts to get the lowercase version of the generator version string in the metadata dictionary. The validator expects all the header values to be stored as strings when the header is parsed, and the float data type has no lower() function.
For some reason the version is stored as a float instead of a string in the failing case. I thought the version might be handled specially, but all I found was a general insertKeyValuePair() function that handles all the header data insertion.
The header parser inserts a key value pair into the metadata dictionary in a recursive manner that I simplified below. There is no explicit type conversion or casting.
metadata[key[0]] = value
When the original value consists of digits and at most one decimal point, value’s type is implicitly converted to a float. When non-digits or multiple decimal points are present in value, it is stored as a string.
When the version is later retrieved from the dictionary, it is expected to be a string, and an error occurs otherwise because the next validation steps expect a string and not a float.
Another header parsing function for the build_plate type also calls the lower() method, which has the same implicit type issue.
Each header parsing function for a specific field returns the expected type so it’s clear that some of the header values are meant to be saved as numeric data types and not as strings.
A header parsing solution could be checking the type of the stored metadata value and doing an explicit type conversion before doing any processing of the value.
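Here is a sketch of both the failure mode and the defensive fix; parse_header_value and get_generator_version are hypothetical stand-ins for the behavior of insertKeyValuePair() and getGeneratorVersion(), not the actual griffin code:

```python
def parse_header_value(raw: str):
    """Hypothetical model of the implicit conversion described above:
    '2.7' becomes the float 2.7, while '2.7.1' stays a string."""
    try:
        return float(raw)
    except ValueError:
        return raw

def get_generator_version(metadata: dict) -> str:
    # Defensive retrieval: normalize to str before calling str-only methods,
    # so a float stored by the parser no longer raises AttributeError.
    return str(metadata.get("version", "0.0.0")).lower()

# With the implicit conversion, metadata["version"].lower() would raise:
#   AttributeError: 'float' object has no attribute 'lower'
metadata = {"version": parse_header_value("2.7")}
```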
We found at least 2 edge cases that force a soft reset on the Ultimaker S-line 3D printers. The root causes were identified and successful workarounds found.
01MAR24 - I notified Ultimaker of the bed leveling footprint finder and G-code header bugs.
05MAR24 - Ultimaker acknowledged the bugs; based on a comment on the community forum, it seems they will be fixed in the next firmware release, with no date given.
Ultimaker makes a streamlined line of dual color FDM 3D printers, and in recent years they have pivoted away from the hobbyist audience to institutional customers. This seems to happen to many 3D printer companies once a “time is more valuable than money” customer base is found.
Ultimaker printers are well built (my UMO+ and UM2+ printers are still going strong!) and have above average reliability. This is partly due to overengineering as well as conservative performance estimates and limited features compared to new 3D printers from Prusa and Bambu. Without incorporating new hardware features that benefit the common user, a company can only float on support contracts for so long until other “newly established” competitors come for a piece of the pie (e.g. Bambu X1E, Prusa Pro).
I don’t even want to think about the rebrand of Makerbot printers to appear as if the Method printer is an adjacent, nonoverlapping product line on par with, or even remotely related to, the Ultimaker S-line. There is currently no Ultimaker S-line vs Method specification comparison on the Ultimaker website post-merger with Makerbot. For 3D printer users who are familiar with the printing capabilities of both, it feels like there was an extra stock of Method printers to offload after the merger.
Ok, that’s enough criticism. I may be a bit too harsh there but it had to be said from the perspective of a non-institutional 3D printer user, software developer, and Ultimaker fan.
Ultimaker is killing it in the slicing software aspect as the lead developer of the open source Cura slicer. Cura has consistently incorporated new slicer innovations ahead of the pack of competing slicers. Notable examples that come to mind are Ironing, Tree Supports, and (more recently) Organic Tree Supports.
Cura may look the most refined of all the slicers, and that often gives the impression that it’s a walled garden with no customizability. The opposite is actually the case: Cura exposes more print settings than other slicers, which empowers the user to get a better print!
Similar to iOS and Android, CuraEngine and Slic3r based slicers (PrusaSlicer and Bambu Studio) have kept abreast with each other in feature parity over the years. The formerly formidable commercial competitor, Simplify3D, is basically dead at this point. The open source nature of both projects allows developers from both projects to freely borrow good ideas from the other project so the 3D printing community grows as a whole.
I look forward to seeing Ultimaker’s innovations that improve the 3D printing industry and community in the future!
TLDR: Use the Blender 3MF Format addon branch with color group support to export models with color data in metadata supported by Printables.
One of the unique features of Printables is support for colors in each listing’s 3D model preview. At first glance, the preview is similar to other 3D model sites where it shows a 3D object on the screen that you can rotate and show a wireframe mode. The color support is not apparent until you upload a 3D model in a format that has colors embedded.
The Fluffcorn model with no color data appears as a single color, while the multicolor Ninja Pot model shows in multiple colors specified by the creator.
If you try to export a model with multiple objects to a 3MF file, all the objects will show up in the same color in the Printables 3D model preview by default. This is because the model preview appears to only support color through the Color Groups element of the 3MF specification.
Initially, it was not clear that the 3D model preview supported colors at all. If you do not embed colors in a supported method, the preview will just show a single object in a single color, even though the shown object is made up of multiple objects. It was hard to find existing 3D models that have multicolor previews. I asked about multicolor previews on the Printables Prusa group, and Ondřej showed me an existing Ninja Pot model with colors embedded as a template. Upon further discussion, it was revealed that colors could be assigned within PrusaSlicer and Microsoft 3D Builder and exported as multiple objects in a single 3MF file.
It’s all well and good that colors can be assigned in 3D slicer software and editors, but I want to know what part of the 3MF file the Printables 3D model preview uses for color info.
I automate the export of larger 3MF files from Blender with Python, using the excellent Blender 3MF Format addon created by GhostKeeper. 3MF only supports zip as the compression method at the moment, so compression and extraction are single threaded and slow, often taking minutes to hours, hence the automation. For scale, some of my larger 3D topo map models are so detailed (looking at you, Alaska 🔨) that I found the Blender vertex limit, which is a known bug.
I use Cura to slice models to print on my modified Ultimaker DXU with multiple nozzles.
Anyways, my multicolor 3MF files were not showing up with any assigned colors in the online model preview which meant that an image that I rendered in Blender was rendered as a solid orange block on Printables. Users who used the Printables preview tool were not able to distinguish between different objects in the model.
To find out how Printables supports colors, I downloaded the Ninja Pot model, which has multiple objects with each object assigned a different color. I also assigned colors to my 3D model of the District of Columbia (DC) in Blender as “materials” and exported it as a 3MF.
The 3MF file is a zip archive of multiple files that contain the 3D model and metadata. The 3D model is stored in XML format in the 3dmodel.model file under the 3D/ directory. I compared the 3dmodel.model files for the Ninja Pot (exported from PrusaSlicer) and DC (exported from Blender).
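Inspecting a 3MF this way is easy to script. The sketch below (my own helper, not part of any addon) unzips the archive and pulls displaycolor values out of the core-spec basematerials elements:

```python
import zipfile
import xml.etree.ElementTree as ET

CORE_NS = "http://schemas.microsoft.com/3dmanufacturing/core/2015/02"

def read_model_xml(path_3mf) -> ET.Element:
    """A 3MF file is a zip archive; the model XML lives at 3D/3dmodel.model."""
    with zipfile.ZipFile(path_3mf) as archive:
        with archive.open("3D/3dmodel.model") as f:
            return ET.parse(f).getroot()

def list_display_colors(root: ET.Element):
    """Collect displaycolor values from any <basematerials><base> elements."""
    return [base.get("displaycolor") for base in root.iter(f"{{{CORE_NS}}}base")]
```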
The Blender 3MF Format addon assigns an object’s Blender “material” as the Base Material attribute under the 3MF Core Spec.
A sample of the 3dmodel.model file for DC exported from Blender:
<?xml version='1.0' encoding='UTF-8'?>
<model xmlns="http://schemas.microsoft.com/3dmanufacturing/core/2015/02">
  <metadata name="Title" preserve="1" type="xs:string">Scene</metadata>
  <resources>
    <basematerials id="1">
      <base name="Material.001" displaycolor="#02CC00" />
      <base name="Material.002" displaycolor="#0002CC" />
    </basematerials>
    <object id="2" name="DC-dual-land-elevation" pid="1" pindex="0">
      ...
    </object>
  </resources>
  ...
</model>
The XML namespace for the 3MF Core Specification is specified, and it is required; without the namespace, the 3MF file will be seen as invalid by 3MF programs. The namespace URL http://schemas.microsoft.com/3dmanufacturing/core/2015/02 no longer resolves, so it is effectively a hardcoded value that 3MF programs look for.
Under resources, we have a basematerials element that contains two child base elements. Each base element has name and displaycolor attributes that describe our 2 defined colors with human readable and sRGB values. The meanings should be obvious once you see them.
Next are the individual objects as object elements. Each object’s mesh data (vertices) is contained in a separate object element.
Each object element has attributes that are defined in the 3MF spec. An abbreviated version of the object attribute spec is below:
Name | Type | Annotation
---|---|---
id | ST_ResourceID | Defines the unique identifier for this object. |
name | xs:string | Name of object to improve readability. |
pid | ST_ResourceID | Reference to the property group element with the matching id attribute value (e.g. <basematerials>). It is REQUIRED if pindex is specified. |
pindex | ST_ResourceIndex | References a zero-based index into the properties group specified by pid. This property is used to build the object. |
id is the object’s unique identifier that it can be referenced with. name is a human readable label. pid is a reference to the basematerials element with an id of 1. pindex references an index within the element referenced by the same object’s pid attribute.
The object with id="2" gets its properties from the sibling element with an id equal to the object’s pid. This references the basematerials element with id="1". Within the element referenced through pid, the child at index pindex is used for the object’s properties.
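That pid/pindex indirection can be demonstrated with a few lines of Python against a trimmed version of the sample above (resolve_object_color is my own illustrative helper):

```python
import xml.etree.ElementTree as ET

CORE = "{http://schemas.microsoft.com/3dmanufacturing/core/2015/02}"

SAMPLE = """<model xmlns="http://schemas.microsoft.com/3dmanufacturing/core/2015/02">
  <resources>
    <basematerials id="1">
      <base name="Material.001" displaycolor="#02CC00"/>
      <base name="Material.002" displaycolor="#0002CC"/>
    </basematerials>
    <object id="2" name="DC-dual-land-elevation" pid="1" pindex="0"/>
  </resources>
</model>"""

def resolve_object_color(root: ET.Element, object_id: str) -> str:
    """Follow pid to the property group, then pindex into its children."""
    resources = root.find(CORE + "resources")
    obj = next(o for o in resources.iter(CORE + "object")
               if o.get("id") == object_id)
    group = next(g for g in resources
                 if g.get("id") == obj.get("pid") and g.tag != CORE + "object")
    prop = list(group)[int(obj.get("pindex"))]
    return prop.get("displaycolor")
```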
The 3MF Core Specification states:
The displaycolor property is meant to be used for rendering purposes only, and not for defining the actual material color of an object.
The statement suggests that basematerials/base elements are meant to describe the digital render color, which fits the digital 3D model preview use case.
However, the Printables 3D model preview does not seem to use the basematerials color to determine the render color.
I viewed the 3dmodel.model file for the Ninja Pot model and have included a modified sample below, rewritten in the context of the DC model for easier comparison.
<?xml version='1.0' encoding='UTF-8'?>
<model xmlns="http://schemas.microsoft.com/3dmanufacturing/core/2015/02" xmlns:m="http://schemas.microsoft.com/3dmanufacturing/material/2015/02">
  <metadata name="Title" preserve="1" type="xs:string">Scene</metadata>
  <resources>
    <m:colorgroup id="1">
      <m:color name="Material.001" color="#02CC00" />
      <m:color name="Material.002" color="#0002CC" />
    </m:colorgroup>
    <object id="2" name="DC-dual-land-elevation" pid="1" pindex="0">
      ...
    </object>
  </resources>
  ...
</model>
At the top, the model element has an additional namespace prefixed with m. This namespace is the 3MF Materials Extension Specification. Note that the core spec namespace remains.
m:colorgroup and m:color have replaced basematerials and base. The displaycolor property is replaced by the similar color property on the m:color element.
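Handling the second namespace just means registering the extension URI alongside the core one. A minimal sketch with xml.etree, parsing an inlined copy of the sample above:

```python
import xml.etree.ElementTree as ET

# Both namespaces from the sample above: core plus the Materials Extension
NS = {
    "c": "http://schemas.microsoft.com/3dmanufacturing/core/2015/02",
    "m": "http://schemas.microsoft.com/3dmanufacturing/material/2015/02",
}

DOC = """<model xmlns="http://schemas.microsoft.com/3dmanufacturing/core/2015/02"
    xmlns:m="http://schemas.microsoft.com/3dmanufacturing/material/2015/02">
  <resources>
    <m:colorgroup id="1">
      <m:color name="Material.001" color="#02CC00" />
      <m:color name="Material.002" color="#0002CC" />
    </m:colorgroup>
    <object id="2" name="DC-dual-land-elevation" pid="1" pindex="0" />
  </resources>
</model>"""

root = ET.fromstring(DOC)
obj = root.find(".//c:object[@id='2']", NS)
# Same pid/pindex flow as with basematerials; only the element types changed
group = root.find(f".//m:colorgroup[@id='{obj.get('pid')}']", NS)
color = group.findall("m:color", NS)[int(obj.get("pindex"))].get("color")
print(color)  # #02CC00
```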
The 3MF Materials Extension Spec states
A <colorgroup> describes a set of surface color properties and SHOULD NOT reference translucent display properties.
Colors [elements] are used to represent rich color, specifically what most 3D formats call “vertex colors”. These elements are used when color is the only property of interest for the material, and a large number will be needed. The format is the same sRGB color as defined in the core 3MF specification.
The object element references follow the same flow: the object with id="2" references the properties in the element <m:color name="Material.001" color="#02CC00" />.
The only significant difference (as far as color is concerned) between Blender and PrusaSlicer produced 3MF files is the use of basematerials vs colorgroup.
Base Materials describe the actual materials used for manufacturing an object and have a displaycolor attribute to specifically define the color used for rendering a material. This is the more extensible element of the two, as additional non-color data may be added in the future.
Color Groups describe ONLY colors and are used when color is the only property of interest for a material. This is the more restricted of the two and may be used for brevity when many colors are expected.
I added an option to the Blender3MFFormat addon to use Color Groups to describe material colors and use object.name to keep human readable names when exporting 3MF files.
One of the most annoying issues was dealing with the addition of the 3MF Materials Extension Spec as a second XML namespace. I also learned that Ultimaker Cura does not import the human readable object.name and instead sets object.name to the filename incremented by 1 when exporting as 3MF, which is destructive and not user friendly.
If you want to see the actual code addition, you can view the pull request on Github.
If you want to use the updated Blender3MFFormat addon with Color Group support, all you need to do is download my color-groups branch of the Blender addon and copy the folder to your C:\Users\%USER%\AppData\Roaming\Blender Foundation\Blender\X.X\scripts\addons directory.
Voila! Now you can upload your Blender models with colors to Printables and users can view your models with the colors you want!
If you want to see the color 3D model preview you can try viewing my topographic relief map models on Printables. The 3D viewer downloads the entire model for viewing so I have linked a few of the smaller models below for faster viewing:
After some use, I found that the Printables preview shifts the render color towards orange, so your colors will appear off, and the wireframe/x-ray views do not seem to be enabled when viewing a multicolor model. There is also an arbitrary limit on file size/memory/rendering time for generation of a Printables thumbnail, so larger models won't show a color preview until you actually click the preview to load it.
Most of these minor issues are understandable as multicolor previews were not a high use feature in the past.
Recently many reliable and lower cost multicolor consumer 3D printers have been released. Some of these systems include the Prusa XL toolchanger, Prusa MMU3, Bambu Lab AMS, and Ultimaker DXU. I was drafting a separate post on multimaterial 3D printing solutions, but new 3D printers changed the landscape and I ran out of time, so I will just note here that a multiple-filament, single-nozzle system such as the Prusa MMU and Bambu AMS passes all filament through the same path, so purges and material contamination are unavoidable and can only be minimized. If you print in different plastics that do not mix well, the print may suffer in adhesion or strength as bits of the previous material will be mixed with the new material.
A multi-nozzle solution, most often seen on the Prusa XL and Ultimaker printers, avoids contamination and wasteful color swap procedures. Priming an unused nozzle is still needed, but the lengthy purging procedure is not.
With more people doing multicolor prints at home with a 3D printer working out of the box, I expect that we will see more 3D models shared as 3MF with assigned colors. If you want to read more about why 3MF is replacing STL, see the 3D map printing FAQ.
Printables’s color 3D model preview shows the high attention to detail of the team at Prusa Research. Even though few creators may publish multicolor designs, users who do publish multicolor models create a more accessible experience for users and are rewarded with a nice 3D model preview!
The SUNLU T3 uses a modified version of the BTT SKR Mini E3 V2 board with TMC2209 stepper drivers in standalone mode. This means that the stepper drivers act only as drop-in replacements for the A4988 driver and there is no runtime UART or SPI configuration communication between the controller board and the stepper drivers.
While creating the configuration, I fixed the fan assignment for the extruder cooling fan, whose pin was previously mismapped as the controller board cooling fan. With stock SUNLU T3 firmware the extruder cooling fan would only turn on when the stepper motors were active, not necessarily when the extruder was hot. If you were heating the hotend for a filament change or cleaning without moving motors, the extruder was not being cooled.
The actual controller board cooling blower fan located underneath the printer is hardwired to the power input so it is always on.
I also enabled PID temperature support for the bed heating and it works fine with the autotuned values. Not sure why SUNLU did not enable it in their firmware.
A picture of the SUNLU T3 bottom with the cover removed is below.
When booking with an OTA, you should always use a credit card instead of a debit card. Using an OTA is a gamble where you try to pick a reputable OTA; a bad one takes your money and fails to make a booking for you, or passes your payment information to another OTA under the table to fraudulently charge your payment.
SuperTravel is a budget Online Travel Agency (OTA) that is listed as a booking site on Google result pages under hotel knowledge panels. I had never heard of SuperTravel, but figured it was reputable enough to take a chance on since it showed up as a choice in the official Google knowledge panel for a hotel.
I booked a $150 lodging for about $100 via SuperTravel after clicking the SuperTravel booking link on a Google knowledge panel for a hotel. Getting $50 off a one-off booking at a hotel that I don’t have a loyalty account with sounded like a good deal.
Two issues happened with SuperTravel:
SuperTravel fraudulently passed my contact and payment data to Booking.com to create a random second booking with Booking.com without any action on my part.
The hotel reservation made through SuperTravel was not honored by the hotel listed on their website. This meant I had to seek a refund for the fake SuperTravel reservation as well as pay extra for last minute alternative lodging.
The first sign of trouble was right after I paid for the booking on SuperTravel’s website. I received 2 hotel confirmation emails.
The first email confirmed my original booking through SuperTravel. Great.
The second email confirmed a booking at a different random hotel through Booking.com. What?
I had only used SuperTravel's website and had not visited Booking.com nor viewed the second booking's hotel on SuperTravel.
The Booking.com booking was made with the same email and payment information used for the SuperTravel booking. I had used a Gmail address with a plus-sign suffix to distinguish it from my base email address. Booking.com shouldn't have this unique email address I provided to SuperTravel. Additionally, Booking.com shouldn't have the new credit card info I used to pay SuperTravel. The Sender field in the Booking.com email appears as Budget Inn Motel, the random hotel for the second confirmation, but the personalized email was actually sent from the Booking.com address customer.service@booking.com.
The innocent explanation is that SuperTravel has a bug somewhere in their software that used my information to pay for a new nonrefundable booking through a referral partnership with Booking.com.
When I arrived at the hotel that I had booked through SuperTravel, the hotel had no reservation on file. When I mentioned that I had booked through an online travel agency, the hotel hostess guessed that I had used SuperTravel before I mentioned the OTA. I was informed by the hotel that they do not honor SuperTravel reservations and that they had informed SuperTravel multiple times in the past. I'm not sure how badly an OTA needs to behave to be blacklisted by a vendor, but I guess this is it. I ended up paying additional out of pocket costs for alternative lodging that night.
Upon calling SuperTravel and explaining the situation, the outsourced representative asked me to email SuperTravel support. He did not understand the issue with SuperTravel using my information to book with Booking.com and repeatedly stated that Booking.com was a SuperTravel partner.
In my email to SuperTravel support, I requested a refund for the original reservation as well as payment for the price difference needed to make last minute lodging arrangements. SuperTravel cancelled the original transaction, but I haven’t heard back from SuperTravel on getting compensation for the additional lodging cost.
I also called Booking.com to cancel the fraudulent booking created by SuperTravel. The representative told me that it was up to the hotel whether or not to process the refund despite acknowledging that the booking was made without my action. I had to initiate a chargeback on the Booking.com reservation with my credit card provider.
You should always use a credit card versus a debit card when booking through an OTA to protect yourself from insufficient funds and fees. It should go without saying that you must possess the funds to pay off the lodging costs charged to the credit card before the end of the billing period to avoid interest fees. If you had used a debit card to book with SuperTravel, both the SuperTravel and Booking.com lodging costs would be taken out of your bank account, which could potentially lead to insufficient funds to pay for alternative lodging and high overdraft fees. The combined cost of the two fake lodging reservations would be unavailable for use until the bank resolves the dispute in your favor, which could take weeks. Using a credit card allows you some additional time to dispute the charges and does not lock up your funds instantly.
A common filament runout sensor design sold online has 3 pins but their functions are not always labeled. The sensor has a 3 pin JST-XH 2.5mm header that you would normally connect to an identical 3 pin port on your printer controller board. It's plug and play if your controller board was designed to accommodate filament runout sensors with the same 3 pin header.
The Ultimaker Ultimainboard controller board used in the Original+ and 2/2+ does not have a labeled 3 pin JST-XH header for filament runout sensors (although it does have a 3 pin JST-XH header for an analog sensor). Instead the unused pins on the board are exposed as breakout 2.54mm headers on the right side of the board.
There are some conflicting resources online as to the correct pinout for these generic filament runout sensors, so I ended up tracing the PCB within the sensor and reverse engineering the pinout: +5V (VCC), GND, Signal (SIG), ordered left to right when the header is latch side up and facing towards you.
The 5V pin provides a voltage to the sensor. Normally the physical switch within the sensor is disconnected and the 5V pin continually sinks to GND. Controller boards with a dedicated sensor pin will have a resistor on the 5V or GND and SIG pins to prevent overcurrent or excessive power drain. If you use general purpose input/output pins on your board to read the filament sensor, you will need to put a high-value resistor such as 47k on the 5V line to prevent shorting the controller 5V power or sensor pin.
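As a sanity check on the suggested 47k resistor, Ohm's law gives the current drawn (and power dissipated) while the switch ties 5V to GND:

```python
# Ohm's law check for the suggested 47k series resistor on the 5V line
V_SUPPLY = 5.0       # volts supplied to the sensor
R_SERIES = 47_000    # ohms

current_a = V_SUPPLY / R_SERIES   # current while the switch sinks 5V to GND
power_w = V_SUPPLY * current_a    # power dissipated in the resistor
print(f"{current_a * 1e6:.0f} uA, {power_w * 1e3:.2f} mW")  # 106 uA, 0.53 mW
```

Roughly 106 microamps is a negligible drain and safely below any GPIO pin limit.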
When a piece of filament is loaded into the sensor, it runs over and presses the switch contact down. The 5V pin is then connected to the SIG pin and disconnected from GND.
When filament has run out of the sensor, it no longer holds the switch contact down and the switch contact springs up, returning to its original state. The 5V pin is disconnected from SIG and current is sunk to GND again.
SIG will read LOW when filament is not present. SIG will read HIGH when filament is present.
There are some 0 Ohm resistors on the board that made reverse engineering tricky, but I think the 0 Ohm components were for manufacturing uniformity rather than obfuscation.
The Ultimaker Ultimainboard’s built in 3 pin analog sensor header has the pin ordering of SIG, +5V, GND. Only the SIG pin on the analog sensor header has a 1k resistor and the +5V and GND pins have no resistors to limit current. If you only need one filament sensor, you could swap the pins in the cable between the sensor and analog header to match up the +5V and SIG pins. Due to no resistor built into the board on GND, you need to either add a resistor inline to GND or +5V to prevent a short or leave GND disconnected.
I have dual extruders on my Ultimaker and wanted to utilize the unused expansion pins on the Ultimainboard to read from 2 filament sensors. The below adapter board schematic and layout reads from 2 filament sensors using the least amount of adjacent pins on the Ultimainboard v2.
I tested this design with the below configuration using the first top 2 pins on the J25 header: ADC0 (54) and ADC1 (55). ADC0 and ADC1 are mistakenly labeled as ADC1 and ADC2 respectively in the UltiMainboard diagram above. The 5V and GND can be supplied from either J26 or J22.
For a more seamless cable setup with a single plug in point on the board, you could use 4 pins in a row on J22 to get 5V, GND, and use the TxD2 and RxD2 pins to read the sensors. You would lose the extra serial port capability.
The Marlin filament runout configuration is
#define NUM_RUNOUT_SENSORS 2
#define FIL_RUNOUT_PIN 54 // ADC0
#define FIL_RUNOUT2_PIN 55 // ADC1
#define FIL_RUNOUT_STATE LOW
I recommend tweaking the rest of the filament configuration values to match your extruder setup and filament path lengths.
#define FILAMENT_RUNOUT_DISTANCE_MM 35
#define FILAMENT_RUNOUT_SCRIPT "M600 T%c U-20"
FILAMENT_RUNOUT_DISTANCE_MM should be large enough to allow the filament to clear the runout sensor output hole after clearing the switch contact. The switch used in the sensor only allows the filament to slide through freely from the input hole. Once the sensor switch contact springs up from no more filament passing over it, moving filament backwards will likely destroy the switch as the filament gets stuck underneath the switch contact's bottom side. The runout distance should not be so long that the filament runs past the extruder gear and loses traction.
The U-XX value in FILAMENT_RUNOUT_SCRIPT represents the unload length of the filament after the runout distance is exhausted. It should retract the filament back out of the extruder input so you can reach it to pull it out.
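To see what the script expands to per tool, a quick sketch; the substitution below uses plain string replacement as an assumption, since Marlin performs the %c formatting internally:

```python
# Hypothetical expansion of the runout script per tool; Marlin substitutes the
# tool number for %c internally, modeled here with plain string replacement.
script = "M600 T%c U-20"
commands = [script.replace("%c", str(tool)) for tool in range(2)]
print(commands)  # ['M600 T0 U-20', 'M600 T1 U-20']
```

So the sensor that triggered determines which tool the M600 filament change targets.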
The SUNLU T3 3D printer comes with a Fast Print mode that claims to speed up printing by 3x and deliver 250mm/s print speed. There is no explanation from the manufacturer of what Fast Print actually changes to deliver faster print times, but we can check the released SUNLU T3 source code. The T3 firmware is a variant of Marlin firmware, which is licensed under the GPL, and SUNLU's release of their changes to Marlin complies with the GPL, which is great!
By browsing the printer menu displayed on the LCD, we can see the Fast Print mode displayed as "Fast Print". Search for the human readable string "fast print" inside the T3 Marlin source code folder to find the below definition of MSG_PRINT_CONFIG set to the localized English string "Fast Print" in language_en.h.
PROGMEM Language_Str MSG_PRINT_CONFIG = _UxGT("Fast Print");
It's surprising that SUNLU used the variable name MSG_PRINT_CONFIG instead of something more descriptive like MSG_FAST_PRINT. We find that MSG_PRINT_CONFIG is referenced within the menu_main() loop in the file menu_main.cpp in the code block below.
void menu_main() {
...
#if ENABLED(LCD_INFO_MENU)
SUBMENU(MSG_INFO_MENU, menu_info);
#ifndef Internal_Version
EDIT_ITEM(bool, MSG_PRINT_CONFIG, &fast_print_enable);
#endif
if(fast_print_enable){
if(fast_print_enable^fast_print_enable_pre)
{
planner.settings.acceleration=800;
planner.settings.max_acceleration_mm_per_s2[X_AXIS]=planner.settings.max_acceleration_mm_per_s2[Y_AXIS]=planner.settings.max_acceleration_mm_per_s2[Z_AXIS]=1500;
planner.settings.max_acceleration_mm_per_s2[E_AXIS] = 2000;
planner.settings.retract_acceleration = 1000;
planner.settings.travel_acceleration = 1000;
fast_print_enable_change=true;
//do{ Serial.print("mmmcurrent_position.e:"); Serial.println(current_position.e); }while(0);
}//
//
}
else{
if(fast_print_enable^fast_print_enable_pre)
{
planner.settings.acceleration=acceleration_Back;
fast_print_enable_change=true;
//do{ Serial.print("mmm1current_position.e:"); Serial.println(current_position.e); }while(0);
}
//
}
fast_print_enable_pre=fast_print_enable;
#endif
...
}
We can clean up the code to be more readable:
void menu_main() {
...
#if ENABLED(LCD_INFO_MENU)
SUBMENU(MSG_INFO_MENU, menu_info);
#ifndef Internal_Version
EDIT_ITEM(bool, MSG_PRINT_CONFIG, &fast_print_enable);
#endif
if (fast_print_enable) {
if (!fast_print_enable_pre) {
planner.settings.acceleration=800;
planner.settings.max_acceleration_mm_per_s2[X_AXIS]=planner.settings.max_acceleration_mm_per_s2[Y_AXIS]=planner.settings.max_acceleration_mm_per_s2[Z_AXIS]=1500;
planner.settings.max_acceleration_mm_per_s2[E_AXIS] = 2000;
planner.settings.retract_acceleration = 1000;
planner.settings.travel_acceleration = 1000;
fast_print_enable_change=true;
}
} else {
if (fast_print_enable_pre) {
planner.settings.acceleration=acceleration_Back;
fast_print_enable_change=true;
}
}
fast_print_enable_pre=fast_print_enable;
#endif
...
}
The current state of Fast Print is stored in a global variable named fast_print_enable that is declared below in planner.h.
bool fast_print_enable=false;
The Fast Print state variable type, string, and pointer are passed to the EDIT_ITEM() macro in menu_main().
The code displays an "Edit Item" under the main menu which displays the localized string for Fast Print mode. The value of fast_print_enable is printed on the right side of this menu item.
When the Fast Print Edit Item is clicked by the user, fast_print_enable is toggled as part of the Edit Item's action() in menu_item.h in the snippet below, where ptr is the pointer to fast_print_enable.
*ptr ^= true;
The XOR operation between *ptr and true toggles the boolean value at the ptr address, which has the same effect on fast_print_enable as:
fast_print_enable = !fast_print_enable;
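Python booleans also support XOR, so the equivalence of the two spellings is easy to verify:

```python
# Demonstrate that XOR with True behaves exactly like logical negation
flag = False
for _ in range(4):
    assert (flag ^ True) == (not flag)  # the *ptr ^= true idiom vs. plain not
    flag ^= True                        # toggle, as the Edit Item action does
print(flag)  # back to False after an even number of toggles
```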
The previous enabled status of Fast Print mode is tracked to avoid redundant execution of the enable/disable code when the menu is redrawn but Fast Print mode status has not changed. When fast_print_enable == fast_print_enable_pre, no additional operations are executed.
This previous value is stored in fast_print_enable_pre at the end of the menu_main() loop.
//Enable Fast Print mode
planner.settings.acceleration=800;
planner.settings.max_acceleration_mm_per_s2[X_AXIS]=planner.settings.max_acceleration_mm_per_s2[Y_AXIS]=planner.settings.max_acceleration_mm_per_s2[Z_AXIS]=1500;
planner.settings.max_acceleration_mm_per_s2[E_AXIS] = 2000;
planner.settings.retract_acceleration = 1000;
planner.settings.travel_acceleration = 1000;
fast_print_enable_change=true;
When Fast Print mode is initially enabled, printer acceleration values are set to high values.
fast_print_enable_change set to true seems to serve a similar purpose as fast_print_enable_pre, acting as a marker that Fast Print mode has changed, but for Fast Print code scattered elsewhere in the code base.
// planner.settings.axis_steps_per_mm[E_AXIS]= pgm_read_float(&_DASU[ALIM(E_AXIS, _DASU)]) * FAST_FEED; // 1.1;
position.e=current_position.e*settings.axis_steps_per_mm[E_AXIS];
// do{ Serial.print("ffffposition.e:"); Serial.println(position.e); }while(0);
// do{ Serial.print("ffffcurrent_position.e:"); Serial.println(current_position.e); }while(0);
The above is code from an added debug/testing function Planner::change_e_stepper_mm() that overwrites the existing position.e variable with current_position.e*settings.axis_steps_per_mm[E_AXIS] to store the current extruder position in step count. After the calculation are some Serial.print statements that should let the developers verify the extruder position in steps. Planner::change_e_stepper_mm() is called near the end of Planner::_populate_block().
current_position appears to be used for actual movement positioning, and position is modified elsewhere for calculations, so modifying position in Planner::change_e_stepper_mm() may not affect the actual printing since this modification is near the end of Planner::_populate_block(). I haven't dug into the planner logic so I'm unsure.
void PrintJobRecovery::resume() {
...
planner.resume_e_stepper_mm(info.current_position.e);
do {
Serial.print("mmmcurrent_position.e:");
Serial.println(current_position.e);
} while(0);
do{
Serial.print("info.current_position.e.e:");
Serial.println(info.current_position.e);
} while(0);
//sprintf_P(cmd, PSTR("G92.9E%s"), dtostrf(info.current_position.e, 1, 3, str_1));
//gcode.process_subcommands_now(cmd);
...
}
There is an equivalent new function Planner::resume_e_stepper_mm() that is called by the above code in PrintJobRecovery::resume() from powerloss.cpp to restore a position. The default Marlin implementation that restores a position with the G92 G-code command is commented out, so Planner::resume_e_stepper_mm() appears to set the software position of the extruder like G92. I'm not sure why the default Marlin implementation wasn't used instead, but the Serial.print() lines suggest that this change was made for debug logging purposes.
//Disable Fast Print mode
planner.settings.acceleration=acceleration_Back;
fast_print_enable_change=true;
When Fast Print mode is disabled, only the printing acceleration value is reset. The axis maximum, retract, and travel accelerations do not seem to be reset. fast_print_enable_change is again set to true to flag the change.
extern bool fast_print_enable;
static float dis_count = 0.0;
static float dis_count_x = 0.0;
static float dis_count_y = 0.0;
static float dis_count_e = 0.0;
float acceleration_Back = 0.0;
static float free_speed_back = 35.0;
void Adjust_Print_Speed() {
#if 1
if (!fast_print_enable) return;
static bool readly_into_wallout = true;
if (destination[Z_AXIS] < 0.5) {
acceleration_Back = planner.settings.acceleration = 800; //1000;//800;
free_speed_back = feedrate_mm_s = 30; //60;//70;//35;
planner.settings.max_acceleration_mm_per_s2[X_AXIS] = planner.settings.max_acceleration_mm_per_s2[Y_AXIS] = planner.settings.max_acceleration_mm_per_s2[Z_AXIS] = 1500;
planner.settings.max_acceleration_mm_per_s2[E_AXIS] = 2000;
planner.settings.retract_acceleration = 1000;
planner.settings.travel_acceleration = 1000;
}
if (readly_into_wallout && fast_print_enable && strchr(GCodeParser::command_ptr, 'W')) {
if (destination[Z_AXIS] > 0.5) {
readly_into_wallout = false;
dis_count_x = abs(max_point[X_AXIS] - min_point[X_AXIS]);
dis_count_y = abs(max_point[Y_AXIS] - min_point[Y_AXIS]);
dis_count = dis_count_x > dis_count_y ? dis_count_x : dis_count_y;
if (dis_count < 11) {
acceleration_Back = planner.settings.acceleration = 1000; //800;
free_speed_back = feedrate_mm_s = 15 + 10; //20;//10;
} else if (dis_count < 16) {
acceleration_Back = planner.settings.acceleration = 1000; //800;
free_speed_back = feedrate_mm_s = 20 + 10; //24;//12;
} else if (dis_count < 21) {
acceleration_Back = planner.settings.acceleration = 1000; //800;
free_speed_back = feedrate_mm_s = 28 + 10; //32;//16;
} else if (dis_count < 50) {
acceleration_Back = planner.settings.acceleration = 1000; //800;
free_speed_back = feedrate_mm_s = 70 + 10; //86;//43;
} else {
acceleration_Back = planner.settings.acceleration = 1400; //1200;
free_speed_back = feedrate_mm_s = 70 + 10; //90;//45;
}
max_point[Y_AXIS] = max_point[X_AXIS] = -999.0;
min_point[Y_AXIS] = min_point[X_AXIS] = 999.0;
}
} else if (!readly_into_wallout && fast_print_enable && strchr(GCodeParser::command_ptr, 'Q')) {
if (destination[Z_AXIS] > 0.5) {
readly_into_wallout = true;
acceleration_Back = planner.settings.acceleration = 1000; //800;
free_speed_back = feedrate_mm_s = 180 + 10; //230;//115;
}
}
if (!readly_into_wallout) {
max_point[Y_AXIS] = max_point[Y_AXIS] > destination[Y_AXIS] ? max_point[Y_AXIS] : destination[Y_AXIS];
max_point[X_AXIS] = max_point[X_AXIS] > destination[X_AXIS] ? max_point[X_AXIS] : destination[X_AXIS];
min_point[Y_AXIS] = min_point[Y_AXIS] < destination[Y_AXIS] ? min_point[Y_AXIS] : destination[Y_AXIS];
min_point[X_AXIS] = min_point[X_AXIS] < destination[X_AXIS] ? min_point[X_AXIS] : destination[X_AXIS];
}
#endif
}
When running printing G-code movements like G0/G1, a new function Adjust_Print_Speed() has some disabled code that increases acceleration and feedrate based on the Z height. The code seems to be disabled because it is only run when the G-code command contains a W or Q letter, neither of which is generated by any FDM slicer that I know of. SUNLU distributes a copy of Cura with their printer profiles built in, but it does not generate G-code with W and Q commands. The disabled code and macros like Internal_Version and HEAT_PIPES_60_W scattered around suggest that SUNLU was developing an adaptive Fast Print mode and experimenting with a higher powered heater.
I didn’t see any setting of velocity to 250mm/s despite the mention of 250mm/s print speeds in SUNLU’s Fast Print marketing image.
Fast Print mode's acceleration values look similar to the high end of user-reported acceleration values in fast Ender 3 V2 printer settings found online. Which makes sense - the T3 is an Ender 3 V2 clone after all.
Without spending more time analyzing every piece of custom code that SUNLU has added to the T3 version of Marlin firmware, Fast Print mode is a straightforward acceleration increase feature that 3D printer users can replicate on their own without using SUNLU's variant of Marlin.
In the future, when I have access to the actual T3 printer currently stored in another location, I'll create a Marlin configuration for it based on the current Marlin code branch. This will allow the printer to take advantage of newer features like Linear Advance, S-curve acceleration, and Input Shaping as well as fix some bugs in the T3 firmware (ex: not properly stopping an SD print over the serial connection).
Download the latest firmware (v3.40 24OCT22 at time of writing) from http://3dsunlu.com/Content/2169603.html.
Open the Readme file in Notepad and follow the directions listed. If there is just one firmware file for your language, follow the Readme steps for the first firmware file.
Set the Z-offset from the printhead BLTouch autoleveling sensor to the bed under Motion > Level > Z-Offset. When selecting the Z-Offset menu item, the printer will home/zero X,Y,Z directions and then position the printhead at the center of the bed for you to electronically adjust the Z-offset.
You can use a piece of paper to determine a Z-offset with moderate resistance when pulling the paper between the nozzle and the bed.
Afterwards, level the bed corners through Motion > Level > Manual Level (Bed Tramming). The nozzle will move to each of the bed corners and you can physically adjust the bed springs at each corner to get the same height across all corners using the paper method above.
Jam detection does not work correctly in firmware v3.40. It will detect a jam when there is no jam, leading to the nozzle being automatically parked and filament purged.
Jam Detection can be turned off in Configuration > Jam > OFF
Power Loss Detection can be turned off in Configuration > Power Loss > OFF
If you leveled your bed manually, ABL may not be necessary for all prints. Turning ABL off on a manually leveled bed can save some time by skipping the 4x4 mesh leveling detection at the beginning of a print.
Turn ABL off in Configuration > AUX Level > OFF
The printer’s preset acceleration is fairly low and can be increased. There is no need for Fast Mode if your acceleration and speed is tuned.
Change acceleration values in Configuration > Advanced Settings > Acceleration
Field | Value |
---|---|
Accel | 800 |
A-Retract | 1000 |
A-Travel | 800 |
Amax X | 1500 |
Amax Y | 1500 |
Amax Z | 100 |
Amax E | 5000 |
See previous note on Acceleration.
The automatic filament loading under Change Filament does not have the correct bowden tube length calculated. It will continuously extrude filament without stopping. Do not use the automatic filament loading.
The bowden tube length is used during the filament change procedure for runout detection so Runout Detection should be turned off in Configuration > Runout Detection > OFF.
SUNLU has the SUNLU T3 configuration files available for download at http://3dsunlu.com/Content/2169603.html listed as Mac Cura configuration.
The downloaded Cura printer profile files can be copied into your Cura configuration directory on Mac or Windows.
When you launch Cura, you can add the preset T3 printer by selecting it under the non-networked printer list. If you imported the T3 printer profile correctly, the printer settings should look similar to below.
The filament diameter under Extruder 1 should be set at 1.75 mm and not 2.85 mm.
SUNLU's provided Start G-code heats up the extruder before heating the bed. The bed takes the longer of the two to heat up, so it should be heated first to avoid oozing or burning filament in a hot nozzle. Homing the Z axis should be done with a heated bed to compensate for any expansion, but not with a heated nozzle, to avoid the hot nozzle damaging the plastic bed.
SUNLU’s provided End G-code has the wrong Y axis value of 270 mm for the final position. The T3 Y axis has a maximum value of 220 mm so the printer crashes the bed into the Y axis limit trying to move to 270 mm.
The corrected Start G-code:
G21 ;metric values
G90 ;absolute positioning
M107 ;start with the fan off
G1 F2400 Z15.0 ;raise the nozzle 15mm
M190 S{material_bed_temperature}; Wait for bed temperature to reach target temp
G28 ;home all
T0 ;Switch to Extruder 1
M109 S{material_print_temperature} ;Set Extruder Temperature and Wait
G1 F3000 X5 Y10 Z0.2 ;move to prime start position
G92 E0 ;reset extrusion distance
G1 F600 X160 E15 ;prime nozzle in a line
G1 F5000 X180 ;quick wipe
G92 E0 ;reset extrusion distance
The corrected End G-code:
G91 ;Relative positioning
G1 E-2 F2700 ;Retract a bit
G1 E-2 Z0.2 F2400 ;Retract and raise Z
G1 X5 Y5 F3000 ;Wipe out
G1 Z10 ;Raise Z more
G90 ;Absolute positioning
G1 X0 Y{machine_depth} ;Present print
M106 S0 ;Turn-off fan
M104 S0 ;Turn-off hotend
M140 S0 ;Turn-off bed
M84 X Y E ;Disable all steppers but Z
Download and import my optimized T3 Cura print settings profile here.
Changes with the greatest impact are layer height, ironing, and pattern:
Setting | Value |
---|---|
Layer Height | 0.12 |
First Layer Height | 0.24 |
Ironing Acceleration | 700 mm/s² |
Ironing Flow | 20% |
Ironing Inset | 0.3 mm |
Ironing Line Spacing | 0.2 mm |
Ironing Speed | 150 mm/s |
Top/Bottom Pattern | zigzag |
The T3 printer seems to be based on Marlin firmware, a project under the GNU GPL v3.0 license. GPL v3.0 requires that manufacturers that modify the Marlin firmware release their changes.
SUNLU has not released the T3 printer source code which would include Marlin configuration files.
If and when the Marlin configuration files are released, you will be able to pick up improvements and features from updated upstream Marlin firmware on the T3 printer.
The T3 is very similar to the Ender 3 Pro. I haven’t looked underneath at the electronics, but I wouldn’t be surprised if it’s a clone of the Ender 3 Pro, and the Ender 3 Pro Marlin configuration could be used with some modifications.
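If the T3 really is an Ender 3 Pro clone, adapting the Ender 3 Pro example configuration would largely be a matter of adjusting a handful of defines in Marlin’s Configuration.h. A hypothetical sketch of the kind of changes involved (the values below are assumptions inferred from the T3’s 220 mm Y travel noted above, not confirmed specifications):

```cpp
// Hypothetical Configuration.h tweaks, starting from the Ender 3 Pro
// example config -- all values are assumptions, not confirmed T3 specs.
#define CUSTOM_MACHINE_NAME "SUNLU T3"

// Travel limits: the T3's Y axis maxes out at 220 mm (see the End G-code fix).
#define X_BED_SIZE 220
#define Y_BED_SIZE 220
#define Z_MAX_POS  250
```

Steps per mm, thermistor types, and endstop wiring would also need to be verified against the actual T3 hardware before flashing anything.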
The Ford Bronco Sport is the unofficial spiritual successor to the sporty design of the 1st and 2nd generation Ford Escape. Ford gradually rounded the body of the 3rd and 4th generation Escape to create room in the lineup for the Bronco Sport. I really wanted to like the Bronco Sport and was excited to drive what I saw as my future upgrade from the 1st generation Escape Hybrid.
For those in the same figurative or literal vehicle as me, the short answer to how the Bronco Sport compares to the 1st generation Escape Hybrid is
The Bronco Sport does everything the 1st generation Escape Hybrid does — only worse.
This review is biased by my point of view as a happy 1st gen Escape Hybrid driver. I believe the designers of the 1st gen Escape got the size and proportions spot on. You can fit a washer and dryer perfectly in the back of that Escape, and the cabin is long enough to transport multiple 8 ft 2x4s without hanging the planks out a window, among many other tasks the Escape handles above average.
I don’t understand why almost every Bronco Sport review mentions the “ground-breaking” built-in bottle opener in the tailgate. The obligatory bottle opener mention has to be a joke or a contractual condition for getting a complimentary Bronco Sport review model. Maybe I don’t pop a cold one often enough while sitting in the trunk, but can’t you just put any bottle opener in a side pocket of the vehicle or bring your own?
Looking past the inclusion of new driving and safety features that are standard in vehicles manufactured in the past 5 years, the Bronco Sport pales in comparison to the 1st generation Ford Escape Hybrid for a modern car that is ~$30k MSRP. The Bronco Sport sacrifices usability in pursuit of form over function. A couple specifications that you won’t find directly compared elsewhere:
Feature | 2021 Bronco Sport Base | 2007 Escape Hybrid |
---|---|---|
Length | 172” | 174” |
Ground clearance | 7.8” | 8.5” |
Turn radius | 18.7’ | 18.35’ |
Trunk space | 32.5 cuft | 27.6 cuft |
Trunk space (rear seats folded) | 65.2 cuft | 65.5 cuft |
Fuel economy (highway 60-80 MPH on I-95) | 30-32 MPG (1.5L 3 cylinder) | 26-32 MPG (electric, 4 cylinder) |
Fuel economy (city, low traffic) | 22-30 MPG? (did not test enough) | 28-35 MPG |
Transmission | ✔️ (8 speed) | ✔️ (eCVT) |
Auto lights | ✔️ | ✔️ |
Front view space over hood | ❌ (bronco sport hood has giant useless ridges and blocks view of critical corner areas) | ✅ |
Side visibility for turning | ✔️ (rear right window is smaller) | ✅ |
Rear view window | ❌ (rear window glass is larger but viewport opening is tiny, why even have window glass that spans side to side when it covers so much plastic?) | ✅ |
Rear window glass opens | ✔️ (motor housing obstructively mounted to the bottom of window opening = less space in the window opening) | ✅ |
Rear tailgate mounted dome lights | ✅ (activated by side mounted button in the trunk) | ❌ |
Rear seat folding | ❌ (seats do NOT fold flat, level with the trunk floor, rear leg area is wasted) | ✅ |
In-trunk clips / covers | ✔️ (not sure how long the plastic mounting clips would last with daily use) | ✔️ |
Analog speedometer and tachometer | ✔️ | ✔️ |
Physical climate controls | ✔️ | ✔️ |
Adaptive (radar) cruise control | ❌ (none on Base) | ❌ |
Lane keeping assist (LKA) | ✅ (conveniently adjustable with a button on the blinker knob) | ❌ |
Forward collision detection | ✅ | ❌ |
Rear backup camera | ✅ | ❌ |
Rear backup object detection | ❌ (does not beep or detect when you are physically close to an object on Base model) | ✅ |
Remote start | ❌ (none on Base model) | ❌ |
OEM Apple CarPlay / Android Auto, Bluetooth, USB ports | ✅ | ❌ |
Driver seat adjustment | ✔️ (manual) | ✔️ (power) |
Front seat back storage pockets | ❌ (2 inch deep side pocket on the driver seat that barely holds an ID card) | ✅ |
Side door storage space | ✅ (fits a hydroflask) | ✔️ |
All wheel drive (AWD) (nonlocking differential) | ✔️ | ✔️ |
Legend Symbol | Meaning |
---|---|
✅ | Better |
✔️ | Exists, not better, not much worse |
❌ | Not available / Much worse (if in direct qualitative comparison) |
I missed many practical features from the 1st gen Escape Hybrid while in the Bronco Sport Base. The lack of seat-back pockets, rear seats that don’t fold level with the trunk floor, and the obstructive bottom-mounted rear window motor are just a few shortcomings I noticed in my short time with the Bronco Sport.
Front visibility is made drastically worse by a flat hood with sporty ridges that serve no practical purpose. The flat hood angle and sharp edge at the front of the hood obstruct the view of precious space in front of the driver. I didn’t measure the decreased viewing distance, but it feels like I lost about 3-5 ft of forward view compared to the gently sloping hood of the 1st gen Escape. The ridges do not accurately signal where the front corners and wheels are, and they are much more pronounced than they need to be. The best way to describe how badly they block the view is to imagine laying a 2x2 along each side of your hood. A subtle depression or raised edge on each side could signal the corners just as effectively without getting in the way.
The Bronco Sport’s rear visibility is also reduced compared to the Escape, as described in the comparison table above.
The 1.5L 3 cylinder engine in the Bronco Sport Base sips fuel on the highway, staying quiet at 1500-2000 RPM even at 80 MPH. The engine and 8 speed transmission are much rougher at low speeds than the Escape Hybrid’s eCVT and all-electric drive. Despite the current generation Escape and Maverick hybrid and plug-in trims using a similar platform and powertrain to the Bronco Sport, there is no hybrid or EV model of the Bronco or Bronco Sport in 2022. Some reports suggest a hybrid/EV Bronco/Bronco Sport may be released in 2024. That would arrive at the same time as or after the next generation Toyota 4Runner and Jeep Recon, which are expected to offer hybrid or EV options. As a hybrid vehicle owner I am also interested in the 4Runner and Recon as possible upgrades to my 2007 Escape Hybrid.
To get the more powerful and possibly more reliable 4 cylinder engine in the Bronco Sport, one must choose the more expensive Badlands trim, which includes the 4 cylinder engine and a differential capable of simulating some level of locking.
Many people hate on CVTs, but it’s a matter of the right tool for the job. Ford got the CVT right in the 1st gen Escape Hybrid. The chunky eCVT in my 2007 Escape Hybrid has been driven over 230k miles, including towing lightly loaded trailers up I-95 from South Carolina to Maryland, without issues. This particular eCVT’s transmission fluid can even be changed with a flush and fill procedure, much like an engine oil change! When driving on the highway at 70 MPH, the eCVT makes a loud noise audible in the Escape Hybrid’s cabin, but I think that’s more due to inferior, compressed sound damping from the year 2007 than the eCVT itself. I’ve driven newer Subaru Foresters with CVTs and the cabin is as quiet as any other car.
I took the Bronco Sport Base to the beach and tried driving it on the sand in the stock configuration. No special tires or airing down were used for this test, to simulate the typical crossover user who just wants some light off-roading. I set the GOAT mode to Sand and drove around in soft beach sand; the vehicle bottomed out during a 3 point turn and got stuck for a few tense seconds before barely working its way out. The Bronco Sport did about as well as I would expect my Escape Hybrid to do under the same circumstances.
Without a locking differential and off-road tires, I just don’t see much advantage of the GOAT modes over the standard AWD/ABS setup of the Escape or other SUVs, since the vehicle is physically limited in how it can distribute power between the front and rear wheels. It’s hard to solve physical limitations with software, so GOAT mode is a wash for most drivers with lower spec Bronco Sports.
The Bronco Sport has soft skid covers on the bottom but has lower ground clearance than the 1st gen Escape and other crossover competitors on the market unless you get the premium Badlands trim.
The ride feels smooth, and turning is sharp with a small turn radius in the Bronco Sport. I can feel the road, yet bumps and shocks are well damped by the suspension. Non-adaptive cruise control works how you would expect, and the car does not turn into a screamer going up hills on the highway. The cruise control does not seem to allow the current speed to dip below the set speed. It would be nice if cruise control in ECO drive mode allowed a few MPH decrease in speed on a brief hill for fuel efficiency and noise.
The Bronco Sport driving range calculation is more accurate than my Escape’s because the Bronco Sport appears to use the correct fuel volume in its calculation. The 1st gen Escape’s calculation assumes a 15 gal fuel tank, but the fuel gauge actually measures ~13.5 gal, underestimating driving range and over-reporting fuel economy because its 0% fuel level cannot account for the remaining ~1.5 gal “reserve” of the official capacity.
The Bronco Sport also retains physical climate controls that the driver can adjust with knobs and buttons by feel, without taking eyes off the road. The vehicle’s 4 analog dials for speedometer, tachometer, engine temperature, and fuel are a relief because the dashboard isn’t taken over by an iPad. There is a small digital pixel screen in the top half center, but it’s sized to show information without demanding your attention. Digital pixel screens with skeuomorphic interfaces look dated a few years after release and never get updates. I prefer dot matrix and 7 segment displays that are fully utilized by the displayed info and never go out of style.
The Bronco Sport does have a couple improvements over the 1st gen Ford Escape. New driving and safety technology such as Lane Keeping Assistance, Forward Collision Detection, and Adaptive Cruise Control (not available on the Base model) make long distance driving more enjoyable. Apple CarPlay lets the driver use their phone as if it were part of the car. These improvements should be taken for granted in 2022, as all newer competitors in the Bronco Sport’s $30-40k price range come standard with these quality of life and safety features.
The upgrades I want most in the next vehicle after my 2007 Escape Hybrid are Adaptive Cruise Control and CarPlay. The baseline features are what the 2007 Escape Hybrid already has. The Bronco Sport only checks one of the two wants at a reasonable price below $40k and does not have a hybrid model available.
Ford’s high pricing, and its practice of holding back features such as Adaptive Cruise Control (marketed as Co-Pilot360+) from lower trims by bundling them into add-on packages with features most drivers don’t actually want, is unfriendly to customers who are interested in the Bronco Sport but don’t want to spend Bronco level money ($40-60k) on a less capable, reborn 1st/2nd gen Escape. The comparable CX-5, Forester, RAV4, and CR-V crossovers come with a majority of the Bronco Sport’s optional features either standard or at a lower price point.
I was glad to drive the Bronco Sport for a couple days and evaluate it as a potential buyer. It has sporty exterior styling and modern quality of life features. The Bronco Sport is a good buy for someone who likes the off-road looks and needs a hatchback for all-around use. It would do well in inclement conditions but should probably stay on paths made for vehicles. The Bronco Sport doesn’t need to stay on the pavement, but it definitely isn’t going to blaze its own path like an off-road vehicle with proper equipment and tires, or a dune buggy.
If we are being honest, I really want to buy the Bronco Sport as a daily driver for its exterior looks and driver assistance/quality of life features. But it’s a hard sell next to the just-as-capable Ford Escape models, which cost less, offer more powertrain options, and have more practical features. The Bronco Sport is a Ford Escape in disguise that charges a premium for looks. I want to see a hybrid/plug-in EV Bronco/Bronco Sport line in the coming years and hope to keep driving my 2007 Escape Hybrid until the “right” model is released.
Support from users offsets operating costs and encourages me to spend time developing. I hope this NAVADMIN Viewer backstory explains the reasoning behind Extended Access for $4.99/year within the iOS app.
I initially developed NAVADMIN Viewer as a side project in 2018. As you might expect, I didn’t think most people would be interested in an administrative message viewer, and I figured this side project for reading NAVADMINs would end up being used only by me. Look, it’s literally named “NAVADMIN Viewer”. But, contrary to what I thought in 2018, app usage took off and I continued to develop and maintain NAVADMIN Viewer in my free time.
Over 4.5 years later, NAVADMIN Viewer does a lot more than list NAVADMINs, serving 80k+ unique active users per year. The 2022 feature roadmap (as of 26SEP22) is below (see the current live roadmap):
Feature | iOS | Android | Web |
---|---|---|---|
Native app | ✅ | ✅ | ✅ (JS) |
Near real-time message updates | ✅ | ✅ | ✅ |
All NAVADMIN/ALNAVs (~2010 and later) | ✅ | ✅ | ✅ |
All MARADMIN/ALMARs (~2015 and later) | ✅ | ✅ | ✅ |
All numbered DoD/DoN issuances | 🚧 | 🚧 | |
Full message search | ✅ | ✅ | 🟡/🚧 |
Offline messages | ✅ | ✅ | 🟡 |
Notifications on new message release | ✅ | 🚧 | 🚧 |
Message popularity ranking | ✅ | ✅ | |
Auto-detection of referenced publications | ✅ | 🚧 | 🚧 |
Customizable message font | ✅ | ||
Bookmark messages | ✅ | ||
iCloud bookmark sync | ✅ | N/A | N/A |
Handoff, Spotlight Search, Siri Shortcuts | ✅ | N/A | N/A |
Runs on MacOS (the computer) | ✅ | N/A | N/A |
Runs on Internet Explorer 11 | N/A | N/A | ✅ |
Legend Symbol | Meaning |
---|---|
✅ | Implemented |
🟡 | Partial support |
🚧 | On roadmap |
N/A | Not Applicable |
As seen in the chart above, not every platform supports every feature. I use an iPhone SE as my personal phone, so naturally my development time is focused more on iOS. Each app is written in its platform’s native language. I don’t use a cross-platform framework 🤢 for NAVADMIN Viewer because I believe native applications provide the best experience in this use case. Native applications can more fully utilize platform APIs and access them sooner, without waiting on a cross-platform middle layer to be updated or bug fixed.
Some of these features are made possible by the NAVADMIN Viewer server infrastructure that aggregates and delivers message data. The iOS, Android, and Web apps fetch message data from the server application for load balancing and uniformity. The earliest version of NAVADMIN Viewer got its messages directly from the official source. Any irregularity at the source (ex: site transition/deleted messages) required a full app update to fix, so I created a server that provides messages to the apps; now any changes at the message source are handled at the server for uninterrupted message delivery. Reducing parsing logic in 3 languages down to 1 language saved hours and keeps me sane 😉
+---------------+   +----------------+   +--------------+
|  iOS Client   |   | Android Client |   |   Web app    |
|               |   |                |   |              |
|  Objective C  |   |     Kotlin     |   |  Javascript  |
+-------+-------+   +--------+-------+   +------+-------+
        |                    |                  |
        +--------------------+------------------+
                             |
                    +--------+----------+      +-------------+
                    |     REST API      |------| Redis cache |
                    +-------------------+      +-------------+
                    |    Data source    |
                    |    Data Parser    |      +-------------+
                    |                   |------| PostgreSQL  |
                    |      Golang       |      |  database   |
                    +-------------------+      +-------------+
The server consists of a front-facing REST API and a concurrent data processor written in Golang. This front-facing application caches a limited number of messages in RAM and a Redis cache. The server uses a PostgreSQL database to store everything else, which is mostly persistent message data.
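As a rough illustration of the read path described above, here is a minimal cache-aside sketch in Go. The type and function names are hypothetical stand-ins (the real server talks to Redis and PostgreSQL, not in-memory maps), but the flow is the same: check the cache first, fall back to the persistent store on a miss, and populate the cache for the next request.

```go
package main

import (
	"errors"
	"fmt"
)

// Store is any backing message store (in production: PostgreSQL).
type Store interface {
	Get(id string) (string, error)
}

// Cache is a fast lookup layer (in production: in-process RAM plus Redis).
type Cache interface {
	Get(id string) (string, bool)
	Set(id, body string)
}

// mapCache and mapStore are illustrative stand-ins for this sketch.
type mapCache map[string]string

func (m mapCache) Get(id string) (string, bool) { v, ok := m[id]; return v, ok }
func (m mapCache) Set(id, body string)          { m[id] = body }

type mapStore map[string]string

func (m mapStore) Get(id string) (string, error) {
	v, ok := m[id]
	if !ok {
		return "", errors.New("message not found")
	}
	return v, nil
}

// fetchMessage is the cache-aside read path: serve from the cache when
// possible, otherwise load from the store and populate the cache.
func fetchMessage(c Cache, s Store, id string) (string, error) {
	if body, ok := c.Get(id); ok {
		return body, nil
	}
	body, err := s.Get(id)
	if err != nil {
		return "", err
	}
	c.Set(id, body)
	return body, nil
}

func main() {
	cache := mapCache{}
	db := mapStore{"NAVADMIN-200-22": "R 061200Z SEP 22 ..."}
	body, _ := fetchMessage(cache, db, "NAVADMIN-200-22") // miss: loads from store
	fmt.Println(body)
	body, _ = fetchMessage(cache, db, "NAVADMIN-200-22") // hit: served from cache
	fmt.Println(body)
}
```

With Redis and PostgreSQL behind these interfaces, a popular message is only read from the database once per cache lifetime, which keeps per-request compute and database load low.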
Alright here we are. Operating NAVADMIN Viewer costs money and my time is valuable.
NAVADMIN Viewer is a side project. I spend my time developing NAVADMIN Viewer because it’s fun and makes an impact helping sailors and others who read administrative messages. I’ve never put ads in NAVADMIN Viewer and strive to keep it that way. If you’re wondering if ads make “a lot of money” — they don’t.
Compute time and data transfer have real costs. NAVADMIN Viewer avoids major costs and outages through automation and by running on a hybrid of free and low cost cloud hosting. NAVADMIN Viewer’s main server ran on a free Heroku instance for the past 4 years, but many PaaS cloud hosts have eliminated their free/low cost offerings lately, and Heroku is discontinuing free products at the end of 2022. Heroku’s paid pricing scales steeply 📈, which I would like to avoid, so I’ve migrated NAVADMIN Viewer to new infrastructure to keep it running. In addition to the main server, Redis and PostgreSQL hosting costs additional money and time to maintain.
The iOS version of NAVADMIN Viewer lets ALL users access the 3 latest messages. I added Extended Access as an in-app purchase to unlock access to all remaining messages. All users can continue searching through all messages and see which messages contain their search term but Extended Access is needed to view the full message.
💯Users who previously supported me via in-app purchase get Extended Access for free. Android and Web versions of NAVADMIN Viewer continue to be free to use. NAVADMIN Viewer on iOS remains the flagship version that gets new features first.
If you find NAVADMIN Viewer helpful and that it saves you time and frustration (AKA it accelerates your life), please consider supporting development and operations by getting Extended Access for $4.99/year.
💸 Not convinced? Think $4.99/year is too much? How much is your and my time worth? ⏳ Here are a couple of things that Extended Access costs less than:
You can reach me at support@ansonliu.com.
Google Play - https://play.google.com/store/apps/details?id=com.ansonliu.navadmin
If a user has unsubscribed from all existing promotional emails, it’s bad form to create a new email category for additional marketing messages and opt all users into it. Block, Inc (formerly Square) is the latest company to opt users into exciting new “messages” which claim:
Your favorite businesses may send you messages and rewards via Square like the one below.
Really? Block knows that I am not interested in Block/Square promotional emails because I explicitly unsubscribed from previous emails, but Block has created a new email category for local business promotions, generously allowing advertisers to reach my inbox through the Block/Square brand.
This is a sorry excuse for trying to increase the click-through rate of a dying category of advertising. The marketing tactic is to carpet-bomb users with so many promotional emails that one of the “new” email lists will inevitably show an increase in click-through rate. It’s hard not to improve the new promotion category’s click-through rate when it started at zero, so from a marketing growth and improvement standpoint, it will always be a success.
These unsolicited emails cost very little to send but inflict a huge drain on users’ time and concentration to sort through and unsubscribe. Block, Inc is basically selling users’ inboxes and time to third parties. Third parties are able to reach previously unreachable users using Block’s vast email list and high email server reputation.
For the unaware: most users don’t trust or understand how unsubscribing works and simply let their inboxes accumulate messages over the years. Some users believe the “unsubscribe” link in every email is a phishing attempt to harvest valid addresses for future spam, and thus never unsubscribe out of an arguably well-founded sense of caution. No one has time to verify which of the below senders is most authentic:
Users simply resign to ignoring the regular promotional emails which end up sitting in an inbox that never gets manually sorted. Gmail and other email providers’ spam filters use global spam samples to determine categorization which is helpful but not perfect. For the vast majority, email is just a means to an end that must be endured.
Block’s marketing department probably claims that these emails are “geographically targeted” so that users will only get promotions from local businesses. Even if the emails are supposedly more relevant (hint: they aren’t), users never asked for these emails and even fewer will go through the trouble of unsubscribing.
Even if Block automatically adds users to a new promotional email category, at least users can easily unsubscribe with a single click, right?
You would expect a reputable company such as Block/Square to have an accessible one-click unsubscribe method (RFC 8058). Simply click the unsubscribe link at the bottom of the promotional email to opt out.
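For reference, RFC 8058 one-click unsubscribe works by pairing two message headers: a List-Unsubscribe header carrying an HTTPS URI and a List-Unsubscribe-Post header, so a compliant mail client can unsubscribe the recipient with a single POST and no login. A compliant sender would include something like this (the URL below is illustrative, not Block’s actual endpoint):

```
List-Unsubscribe: <https://example.com/unsubscribe/opaque-token>
List-Unsubscribe-Post: List-Unsubscribe=One-Click
```

The opaque token identifies the recipient and list server-side, which is exactly why no password or SMS code should ever be needed to opt out.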
Oh. Unsubscribed from the Spice 6 Modern Indian email list.
This does not mean the user is unsubscribed from the new local business promotional messages. You’ve merely opted out of email blasts from the Spice 6 Modern Indian restaurant. You’ll still get emails from any other third party to which Block is more than willing to sell access to your inbox. I know this because I got another promotional email from Block hawking a Korean restaurant 2 days later.
Maybe the user needs to sign into their Square account to unsubscribe from a larger marketing email category. This takes more than one click for the user and requires that they either have their phone available for an SMS code or remember their password. I don’t have my Square password memorized and I’m sure most users don’t either. Depending on the user’s work environment, their phone may not be available so they end up using the password reset link to login. That’s a lot of clicks.
Marketers will say that authenticating the user is necessary so that a misclick by a second person, to whom the email was forwarded, does not accidentally unsubscribe the original recipient, robbing them of the opportunity to get valuable offers from third parties.
In no world will a recipient forward these marketing emails to someone who then clicks the unsubscribe link resulting in the original recipient missing out on anything. In fact, the forwarded recipient probably did both of them a favor.
Ok, so we’ve logged into Square to hopefully stop receiving emails from this new marketing list. There’s no option to unsubscribe from anything; there isn’t even an option to change the email address. This is where 99% of users give up, because users have better things to do in life, and so do we.
Block/Square isn’t the only offender opting users into new promotional categories; in recent memory, LinkedIn and Twitter have made similar changes to their notification and email lists.
Rather than waste more time trying to opt-out of this new category of spam, we can mark it for what it is as a sample for our email providers’ spam filters and hope that the filter can save others from the same fate.