From a Neurodivergent Drupal to a Neurotypical one
About two years ago a friend of mine posted her ADHD diagnosis on Facebook. From the outpouring of support in the comments it suddenly became clear that half my feed has ADHD. I am quite selective with Facebook friends: half of them are high school classmates, the other half old-time Drupalers. This was the latter half. I sought a diagnosis — and, of course, I have it as well.
Today I talked to a relatively new contributor to Drupal core — he started before Drupal 8 came out. Yup, in my world a mere nine years makes you a new contributor :D He is doing a very awesome job, mind you. When we discussed some big changes necessary for core, he pointed out that some might not get done because they provide no business value. I tried to describe the old days as driven by passion, challenge and having fun. How the old pipeline was: make a small website — get interested / passionate in Drupal — contribute — get hired. Now it seems contributors are much more often doing it because they are paid to do so.
I have a very old blog post where I correctly identified that I do Drupal core because of the dopamine hit (see having fun above…), although I ascribed it to flow and not ADHD, simply because I didn't know much about ADHD back then.
And then tonight this tweet shows up from @adhdjesse, whose newsletter taught me about Rejection Sensitive Dysphoria (I so, so badly wish I had known about this 15+ years ago). Here's what it says:
What’s the most helpful thing you’ve learned about ADHD? For me, it’s about motivation.
Neurotypicals are (primarily) motivated by:
- importance
- rewards
- consequences
ADHDers are (primarily) motivated by:
- interest
- creativity/novelty
- challenge
- urgency
I am screaming at how well this matches. Just based on the interactions described above, it is becoming quite clear that old Drupal was written mostly by ADHD people and now it is written by neurotypicals. It's not better or worse, it's different.
P.S.: and yes, urgency matches too: see the policy change from "no backwards compatibility, rush, rush" to today's fully backwards compatible, timed release cycle.
Upgrading from Drupal 9 to Drupal 10
Let’s prepare:
composer config -g allow-plugins.mglaman/composer-drupal-lenient true
composer config -g allow-plugins.chx/jump true
composer config -g allow-plugins.chx/drupal-issue-fork true
composer require mglaman/composer-drupal-lenient chx/jump chx/drupal-issue-fork
composer jump
rm composer.lock
git commit -am 'd10 prepare'
Now try composer install.
If you run into errors, some of your contrib is not D10 ready. Note that I found composer error messages to be completely useless when the lock file is present; they are somewhat useful when it is not: the output will contain the name of the offending module somewhat close to the bottom.
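Not part of the original recipe, but a hint that can save a round trip: before wiping the lock file, composer itself can usually name the blockers if you ask it why core 10 cannot be installed, for example
composer why-not drupal/core ^10
(why-not is an alias for composer prohibits).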
Reset with rm composer.lock and edit composer.json so the module installs. These composer edits can be done by composer itself. Visit the drupal.org home of the project and look around.
- Sometimes there is a D10 compatible version but it's not marked as the latest release: composer require --no-update 'drupal/elasticsearch_connector:^7.0@alpha' solves that. If this is not the case, continue to the issue queue.
- A patch is preferred because it keeps updates possible, and when it no longer applies the patch can simply be removed (a sketch of the resulting composer.json follows below):
composer config --merge --json extra.patches.drupal/encryption '{"D10": "patches/encryption/d10.patch"}'
composer config --merge --json extra.drupal-lenient.allowed-list '["drupal/encryption"]'
- Issue forks can be used instead of patches, but while patches self-report when they are no longer needed, forks do not. However, if the project needs composer.json changes to install with D10, there's no choice. The composer.json changes are described in the handbook. As noted there, there's a plugin to automate this too:
composer drupal-issue-fork https://git.drupalcode.org/issue/brandfolder-3286340/-/tree/3286340-automated-drupal-10
Later, when the branch has been merged, you can run
composer drupal-issue-unfork brandfolder
to remove the issue fork and upgrade the version to the latest. This command also merely edits composer.json.
Do not use merge diffs from drupal.org directly, like https://git.drupalcode.org/project/encryption/-/merge_requests/4.diff, because that exposes you to a supply chain attack: the remote diff can change at any time. Instead, save the patch locally and apply it as above.
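For reference, a sketch of what the extra section of composer.json ends up containing after the two composer config calls in the patch step above (the patches key is read by cweagans/composer-patches, which the project presumably already requires; your file will contain other keys as well):
"extra": {
    "patches": {
        "drupal/encryption": {
            "D10": "patches/encryption/d10.patch"
        }
    },
    "drupal-lenient": {
        "allowed-list": ["drupal/encryption"]
    }
}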
Now commit the new composer.json. I like git commit --amend -a -C HEAD but of course separate commits for each edit also work.
Now repeat the install until success.
If you follow the real best practices, don't forget to git add vendor/mglaman vendor/chx at prepare, and do a git clean -f vendor web/modules web/core web/libraries web/themes after reset.
More praise for decorators
Our problem was that the marketing team wanted information in Marketo about our visitors struggling with forms. Makes total sense: a better explanation of what is expected, more client side validation and so on make for a smoother experience. However, hook_ajax_render_alter only contains the AJAX commands being sent and does not have any form information, and the myriad extension points in Form API do not have access to the AJAX commands. What now?
A little background
One of the most important features of Drupal has always been extensibility. It has had the hook system since the dawn of time, which allows adding to and changing data structures at various points of the code flow. However, rare cases have always been a problem: what if a hook was not available? It's fairly impossible to think of every possible use case ahead of time, after all.
Another extension point was the ability to replace certain include files wholesale, for example to facilitate different path alias storages.
In Drupal 8 both still exist but are vastly expanded. Events joined hooks, and a ton of functionality lives in plugins, which are identified by their ID; the class providing the relevant functionality — similarly to the include files — can be replaced wholesale.
Now, all this replaceability is great, but what happens when two modules want to replace the same file? Their functionality might not even collide (they might want to change different methods), but as the replaceability is class level, there is no other choice but to replace the entire class. Note the situation is not always this bad because of derivatives: it's possible that originally one class provided the functionality for, say, every entity type, but if only a specific entity type needs a different implementation, a plugin class can be provided for that derivative; see the NodeRow class in Views for a simple example.
Now, for plugins we have no choice other than complete replacement, with the derivative mechanism described above providing some relief, but a lot of functionality lives in services. And while there is alter functionality for services (which is neither a hook nor an event, because both of those depend on services), it suffers from the same problem: what happens when two modules want to alter the same service?
Thankfully, for services there is a better way: decorators.
Decorators
For the original problem we needed to find the bridge between form API and AJAX commands — there must be one!
Indeed, the form_ajax_response_builder service implements an interface with just one method, which receives the form API information plus an initial set of commands and builds a response out of them. It's real lucky this was architected like this — the only non-test call in core calls it with an empty set of initial commands, so it wouldn't have been unreasonable to not have this argument at all, and then we would be in a pickle. But as it is, we can decorate it. This means our service will replace the original, but at the same time the original will not be tossed; rather, it is renamed and passed to ours, and we will call it:
sd8.form_ajax_response_builder:
  decorates: form_ajax_response_builder
  class: Drupal\sd8\Sd8FormAjaxResponseBuilder
  arguments: ['@sd8.form_ajax_response_builder.inner', '@marketo_ma']
And the shape of the class is this:
class Sd8FormAjaxResponseBuilder implements FormAjaxResponseBuilderInterface {

  // The decorated (inner) response builder.
  protected $ajaxResponseBuilder;

  // The Marketo MA service.
  protected $marketoMaService;

  public function __construct(FormAjaxResponseBuilderInterface $ajaxResponseBuilder, MarketoMaServiceInterface $marketoMaService) {
    $this->ajaxResponseBuilder = $ajaxResponseBuilder;
    $this->marketoMaService = $marketoMaService;
  }

  public function buildResponse(Request $request, array $form, FormStateInterface $formState, array $commands) {
    // Custom code comes here and adds commands to $commands to taste.
    // ...
    // And then we call the original.
    return $this->ajaxResponseBuilder->buildResponse($request, $form, $formState, $commands);
  }

}
The name of our service is 100% irrelevant as it'll be renamed to form_ajax_response_builder. Now if two modules want to mess with AJAX forms, they do not step on each other's toes. We do not rely at all on the form_ajax_response_builder service being the core implementation. Although with just a single method it is less important, take care to implement every method of the interface and call the inner service, instead of extending the core original and overriding just the one method you need: you can't know whether the service you decorate will always be the core functionality. Be a good neighbour. It's only a bit more work, mostly simple typing. And as the Writing the tranc module article mentioned, you might discover some bugs and problems when properly delegating.
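To make "not stepping on each other's toes" concrete, here is a sketch of what a second, purely hypothetical module could put in its services.yml to decorate the very same service (the module and class names are invented); the container stacks the decorators, so each decorator's .inner argument points at the next layer down, which may well be another decorator rather than core:
othermodule.form_ajax_response_builder:
  decorates: form_ajax_response_builder
  class: Drupal\othermodule\OtherFormAjaxResponseBuilder
  arguments: ['@othermodule.form_ajax_response_builder.inner']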
So this is alter on steroids: if you need to change some functionality provided by a service and no official extension point exists, you can decorate it, write some boilerplate implementing the interface by calling the inner service methods, and Bob's your uncle.
USB C
- USB C is a physical connector. It has four high speed lanes and assorted tidbits: most importantly, power, a separate pair of wires for USB 2.0 and finally one wire to negotiate power and data mode.
- Everything is negotiated: which end behaves as a power provider and which end behaves as a power sink. Which end behaves as the downstream data port (host) and which one is the upstream port (device). What kind of data will be transmitted.
- Power: 5V 3A for legacy devices; this is always available and is the only thing that requires no negotiation, merely a few resistors. Up to 60W (20V 3A) is possible with every USB C-C cable; the voltage and amperage are negotiated. 100W (20V 5A) requires a special cable. Some 5V-only devices do not implement the specification properly and can only be used with an A-C cable or from a 5V-only USB C charger. r/UsbCHardware/ calls these "broken" for good reasons.
- The high speed lanes can carry USB signals, DisplayPort signals or Thunderbolt signals (in theory they could carry anything, but these are the ones used in reality).
- USB needs one lane to transmit and one lane to receive 5 or 10gbit per second USB data. As mentioned, USB 2.0 speed is always available, separately.
- DisplayPort can use two or four lanes to transmit video data. It is possible to use two lanes for DisplayPort and two lanes for USB. DisplayPort data is commonly 4.32Gbps per lane of effective video bandwidth as defined in DisplayPort 1.2 (5.4Gbps with overhead); more rarely it can be 6.5Gbps per lane as defined in DisplayPort 1.3 (8.1Gbps with overhead). The latter requires DisplayPort 1.4 support from the host (1.3 alone is not used in practice), which is rare because Intel integrated GPUs are DP 1.2 except for "Ice Lake" and "Tiger Lake" chips. Video bandwidth calculators: 1 2. Practically all USB C - DP adapters work with DP 1.4 without a problem, as these adapters just negotiate the correct mode on the USB C side and do not touch or even know anything about the actual DisplayPort signal.
- Thunderbolt is a different world, it requires special cables. It occupies all four lanes, it’s a bus with a 40gbit/s data rate. It carries a mixture of PCI Express and DisplayPort data. The PCI Express data speed is nerfed by Intel to 22gbps although many laptops with a single TB3 port can only do 16gbps. The TB3 bus does not carry USB signals, USB ports are provided by a USB root hub built into the dock’s TB3 controller. The only supplier of TB3 controller ICs is Intel. They have two generations of chips, the older Alpine Ridge only supports DisplayPort 1.2, Titan Ridge also supports DisplayPort 1.4.
- USB 4 is Thunderbolt 3 with a very important addition: now USB packets will be found on the bus too. This eliminates the hotplugged USB root hub, for stability and a much better overall user experience. Also, it's very likely PCIe will be able to reach 32gbps this time (or maybe even 40gbps with PCIe 4.0?). This mode, however, will be optional. Everything above still applies, so USB 4 ports will be even more confusing in their capabilities.
- To avoid this confusion, Intel decided to name “USB 4 with every feature required” Thunderbolt 4.
To run multiple monitors:
- The DisplayPort standard has its own thing where it can split the data coming out of a single connector to multiple displays. This is called MST and is not supported by Mac OS.
- Thunderbolt behaves as if there were two DisplayPort connectors and is the only way for Mac OS to run multiple monitors while plugging a single cable into the host. Plugging two cables saves you hundreds of dollars. Caveat: many laptops with a single Thunderbolt port use an "Alpine Ridge LP" controller and only have one DisplayPort 1.2 worth of bandwidth on the bus. You can check whether yours has it here; even if you don't run Linux, the components list is correct.
- The above was for two monitors, and that's the top of Mac capability. For Windows, some first party docks (Lenovo, Dell) have MST hubs built into them. These are much cheaper on eBay; often the cheap auctions come without a power brick. Both Lenovo and Dell have standardized on their power bricks within the brand, so any high wattage Dell brick will work for a Dell dock, same for Lenovo. Make sure to buy them from the USA: even if it is purported to be original Dell, Chinese auctions are kinda sus.
To run multiple laptops from the same monitor / USB peripherals aka KVM:
- I only know of one USB C switch, it’s industrial and breathtakingly expensive.
- There are some KVM switches which have USB C inputs and legacy outputs: iogear GUD2C04 Access Pro, Black Box USB-C 4K KVM.
- The cheapest solution by far is to forget USB C and use a software KVM. It detects when a USB A switch connects/disconnects the peripherals and sends the monitor a request to switch inputs. This obviously only works if the monitor has multiple inputs, but most do.
To connect USB C monitors:
- Belkin has a VR cable which plugs into USB A and DisplayPort inputs and a USB C monitor.
- The Wacom Link Plus has USB A, HDMI and DisplayPort inputs and a USB C output.
- The Dell WD19 is a USB C hub which has a USB C downstream capable port. This is unique.
- TB3 docks with a downstream (chaining) TB3 port are also usable as a plain USB C port which is also DisplayPort alternate mode capable.
Footnotes:
- Naming is not a strength of the USB IF. 5gbps USB is called USB 3.0, USB 3.1 Gen 1, USB 3.2 Gen 1, Superspeed USB. 10gbps USB is called USB 3.1 Gen 2, USB 3.2 Gen 2, Superspeed Plus USB. We typically just call them 5/10gbps USB to avoid wading into this mess.
- The faster the data speed, the shorter the cable. Cables omitting the high speed lanes (so only USB 2.0 and charging is possible) can be 4m long, 5gbps lane speed allows for 2m, 10gbps only allows for 1m. There are two ways to escape these limits: the cheap way, where marketing will spin a story on how a cable made from the finest chinesium can surpass the spec, and the expensive way, where active circuitry is added to the cable to counter the signal loss. Cable Matters has a 3m 10gbps cable, a 5m 5gbps cable, and that's it for affordable active USB C cables. Thunderbolt cables can be 0.5m for 40gbps (although some 0.8m cables have appeared recently, Plugable is recommended), up to 2m for 20gbps passive, or up to 2m for 40gbps with an active cable. The active cables can only be used for Thunderbolt, not plain USB, except for the Apple Thunderbolt 3 Pro cable.
- Docks touting "4K support" very, very often mean "4K @ 30Hz", because they utilize two lanes for DisplayPort and two lanes for USB 3.0, and that's what two lanes' worth of DisplayPort is capable of. In reality no one wants a 30Hz monitor, so up to 3440 x 1440 @ 60Hz and 1080p @ 144Hz are the typical maximum resolutions used with these docks (HDMI 1.4 can only do 1080p @ 120Hz, you need DisplayPort for 144Hz). Again, video bandwidth calculators: 1 2. If you need USB 3.0 then these are the maximum without Thunderbolt (and without DisplayPort 1.4).
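A quick sanity check of that 30Hz claim with rough, reduced-blanking numbers: two lanes at the 4.32Gbps-per-lane figure from above give about 8.6Gbps of video bandwidth. 4K @ 60Hz at 24 bits per pixel needs roughly 12.5Gbps, which does not fit; 4K @ 30Hz needs a bit over 6Gbps and 3440 x 1440 @ 60Hz roughly 7.7Gbps, both of which do.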
Avoid the following:
- Docks passing PD power through with removable cables. You need a DC input for that. More in this article; tl;dr: every cable, including the one between the hub and the device, has a loss. If the cable is captive and short then the loss can be calculated; otherwise you'd need to boost the power and do a full power delivery renegotiation, which most hubs don't do.
- Magnetic cables not only violate the specifications but pose an immense danger to the host: they expose the pins that the normal connector hides within a grounding shroud (note how DisplayPort, HDMI, USB and more share this general design), and a static discharge might fry the device.
- Sometimes, when trying to escape the bandwidth limits, you will find docks utilizing something called DisplayLink. The biggest tell is the ability to run video from a USB 3.0 (aka USB A) port. These are good for running office apps but not much else; gaming will especially suck. In general, you should avoid these. Disappointment is almost guaranteed.
There are two modules providing lat/lon storage in Drupal 8/9: geolocation and geofield. I went with geofield simply because geocluster is using it. geocluster clusters on the server side, and that lets you display an astonishing number of elements on a single map. While geolocation is mostly a single module, there's an entire family of modules we will need here. Also, as far as I can tell, we are limited to Leaflet here because I can't find ready-made GeoJSON support for anything else.
composer require drupal/geocluster drupal/views_geojson drupal/leaflet drupal/leaflet_geojson
- Add a geofield (NOT a geocluster — that’s a bug, a patch has been filed), I called it coordinates
- Add a view, no page, no block, nothing.
- Leave the pager in place. We will remove it later.
- Add field Geocluster lat (coordinates). Leave aggregator settings on “Group results together”.
- Add field Geocluster lon (coordinates)
- Add field Geocluster result count (coordinates)
- Add field coordinates and exclude it from display
- Add title and exclude it from display. Set aggregator settings to “GROUP_CONCAT”.
- Remove sort criteria.
- Now add a GeoJSON export and in settings click Enable geocluster for this search.
- Set Map Data Sources to Other: Lat/Lon Point
- Set Latitude field to Geocluster lat (coordinates)
- Set Longitude field to Geocluster lon (coordinates)
- Set Title field to Title.
- Now magic has happened! If you set up Views to show the query, you will see that GROUP BY node__coordinates_coordinates_geocluster_index_1 is the only GROUP BY left. This is why it works at all.
- You can now remove the pager.
- Add a Views GeoJSON: Bounding box contextual filter
- Provide default value
- Type: Query Parameter
- Query parameter: bbox
- Fallback value: -180,-90,180,90
- The leaflet_geojson module provides a block which miraculously (well, based on the GeoJSON export) will pick up our view and it'll just work.
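For illustration only (the path is made up; it is whatever path you gave the GeoJSON export display): as the map is panned and zoomed, the block ends up requesting something like
/my-map/geojson?bbox=-122.52,37.70,-122.35,37.83
and the bounding box contextual filter configured above turns the bbox query parameter into the condition that limits the returned clusters to the visible area.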
Our bug was views-view-field.html.twig coming back empty for the entity API field track_icon on node 80136.
The template only prints the output variable. Where is that coming from?
function template_preprocess_views_view_field(&$variables) {
  $variables['output'] = $variables['field']->advancedRender($variables['row']);
}
In FieldPluginBase::advancedRender I set a breakpoint with the condition $values->_entity->id() == 80136 && $this->table == 'taxonomy_term__track_icon'. We hit the breakpoint exactly twice, as expected, because we print this node twice. The second time, $raw_items = $this->getItems($values); comes back empty (not good!).
EntityField::getItems() runs $build_list = $this->getEntityFieldRenderer()->render($values, $this); which in turn means EntityFieldRenderer::render runs into "Pick the render array for the row / field we are being asked to render, and remove it from $this->build to free memory as we progress." The meat is $build = $this->build[$row->index][$field_id]; and when broken, $build only contains cache metadata, nothing else: just a #cache key, with contexts and tags.
Paging through the class (it's not a lot of code) we find the buildFields method, which iterates every row in the result and calls EntityViewDisplay to run the formatters: $display_build = $display->buildMultiple($bundle_entities);. EntityViewDisplay::buildMultiple has this gem:
$build_list[$id][$name] = $field_access->isAllowed() ? $formatter->view($items, $view_langcode) : [];
// Apply the field access cacheability metadata to the render array.
$this->renderer->addCacheableDependency($build_list[$id][$name], $field_access);
That is beyond suspicious, because that's where the "nothing but cacheable metadata" might very well come from, so we slap a $name === 'track_icon' && !$build_list[$id][$name] breakpoint here (we know $name is the field name because of foreach ($this->getComponents() as $name => $options) {). It doesn't fire. Darn, that was a good shot. Let's try $name === 'track_icon' && !isset($build_list[$id][$name]['#theme']) instead. That fires the expected number of times, yay.
So I set a breakpoint on $build_list[$id][$name] = $field_access->isAllowed() ? $formatter->view($items, $view_langcode) : []; itself, for name track_icon and id 1, and step through. Bingo: we ran afoul of if (static::$recursiveRenderDepth[$recursive_render_id] > static::RECURSIVE_RENDER_LIMIT) { in EntityReferenceEntityFormatter::viewElements. The recursion protection is only ever increased, never decreased.
Despite the name of the constant RECURSIVE_RENDER_LIMIT, the doxygen accurately tells you it actually has nothing to do with recursion, and that this is a feature, not a bug: "The number of times this formatter allows rendering the same entity." I will, for now, live without this feature.
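For completeness, here is the kind of workaround that is possible if you really do need the same entity rendered on every row: a custom field formatter extending the core one and resetting the counter. This is only a sketch; the module name, plugin ID and class name are invented, and resetting the counter also throws away the protection against genuinely recursive rendering, so use it with care.
namespace Drupal\mymodule\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\Plugin\Field\FieldFormatter\EntityReferenceEntityFormatter;

/**
 * Hypothetical "rendered entity" formatter without the render count limit.
 *
 * @FieldFormatter(
 *   id = "mymodule_entity_reference_entity_view",
 *   label = @Translation("Rendered entity (no render limit)"),
 *   field_types = {"entity_reference"}
 * )
 */
class UnlimitedEntityFormatter extends EntityReferenceEntityFormatter {

  public function viewElements(FieldItemListInterface $items, $langcode) {
    // The parent only ever increments static::$recursiveRenderDepth, so once
    // the same entity has been rendered RECURSIVE_RENDER_LIMIT times in a
    // request, its output degrades to cache metadata only. Resetting the
    // counter keeps every row rendered, at the cost of recursion protection.
    static::$recursiveRenderDepth = [];
    return parent::viewElements($items, $langcode);
  }

}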