Upgrading from Drupal 9 to Drupal 10

Let’s prepare:

composer config -g allow-plugins.mglaman/composer-drupal-lenient true
composer config -g allow-plugins.chx/jump true
composer config -g allow-plugins.chx/drupal-issue-fork true
composer require mglaman/composer-drupal-lenient chx/jump chx/drupal-issue-fork
composer jump
rm composer.lock
git commit -am 'd10 prepare'

Now try composer install.

If you run into errors, some of your contrib modules are not D10 ready. Note that I found composer error messages to be completely useless when the lock file is present; they are somewhat useful when it is not: the name of the offending module appears somewhat close to the bottom.

Reset with rm composer.lock and edit composer.json so the module installs. These composer.json edits can be made by composer itself. Visit the drupal.org home of the project and look around.
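Before digging through the issue queue, composer itself can name the blockers. `composer why-not` (an alias of `prohibits`) lists which installed packages forbid a given version; the module name below is just an example:

```shell
# Which packages forbid Drupal 10 core? Run without composer.lock present
# for the most useful output.
composer why-not drupal/core '^10'

# And the reverse: why is a given contrib module installed at all?
composer why drupal/elasticsearch_connector
```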

  1. Sometimes, there is a D10 compatible version but it’s not marked latest.

    composer require --no-update 'drupal/elasticsearch_connector:^7.0@alpha'
    If this is not the case, continue to the issue queue.
  2. A patch is preferred because it keeps updates possible, and when it no longer applies the patch can simply be removed:

    composer config --merge --json extra.patches.drupal/encryption '{"D10": "patches/encryption/d10.patch"}'
    composer config --merge --json extra.drupal-lenient.allowed-list '["drupal/encryption"]'
  3. Do not use merge request diffs from drupal.org directly, like https://git.drupalcode.org/project/encryption/-/merge_requests/4.diff, because that opens you up to a supply chain attack.

  4. Issue forks can be used instead of patches, but while patches self-report when they are no longer needed, forks do not. If the project needs composer.json changes to install with D10, however, there is no choice. The composer.json changes are described in the handbook. As noted there, there’s a plugin to automate this too:

    composer drupal-issue-fork https://git.drupalcode.org/issue/brandfolder-3286340/-/tree/3286340-automated-drupal-10

    Later, when the branch has been merged, you can run composer drupal-issue-unfork brandfolder to remove the issue fork and upgrade to the latest version. This command, too, merely edits composer.json.
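Taken together, the composer config commands from step 2 leave a section in composer.json shaped roughly like this (paths and project names are the examples from above):

```json
{
    "extra": {
        "patches": {
            "drupal/encryption": {
                "D10": "patches/encryption/d10.patch"
            }
        },
        "drupal-lenient": {
            "allowed-list": ["drupal/encryption"]
        }
    }
}
```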

Now commit the new composer.json. I like git commit --amend -a -C HEAD, but of course separate commits for each edit also work.

Now repeat the install until success.

If you follow the real best practices, don’t forget to git add vendor/mglaman vendor/chx at the prepare step and to run git clean -f vendor web/modules web/core web/libraries web/themes after each reset.

October 28, 2023

More praise for decorators

Our problem was that the marketing team wanted information in Marketo about our visitors struggling with forms. Makes total sense: better explanations of what is expected, more client side validation and so on make for a smoother experience. However, hook_ajax_render_alter() only contains the AJAX commands being sent and does not have any form information, and the myriad extension points in Form API do not have access to the AJAX commands. What now?

A little background

One of the most important features of Drupal has always been extensibility. It has had the hook system since the dawn of time, allowing modules to add to and change data structures at various points of the code flow. However, rare cases have always been a problem: what if a hook is not available? It’s fairly impossible to think of every possible use case ahead of time, after all.

Another extension point was the ability to replace certain include files wholesale, for example to facilitate different path alias storages.

In Drupal 8 both still exist, but vastly expanded. Events joined hooks, and a ton of functionality is in plugins, which are identified by their ID; the class providing the relevant functionality can, similarly to the include files, be replaced wholesale.

Now, all this replaceability is great, but what happens when two modules want to replace the same file? Their functionality might not even collide; they might want to change different methods. But as the replaceability is class level, there is no other choice but to replace the entire class. Note the situation is not always this bad, because of derivatives: perhaps one class originally provided the functionality for, say, every entity type, but if only a specific entity type needs a different implementation, it’s possible to provide a plugin class for just that derivative; see the NodeRow class in Views for a simple example.

Now, for plugins we have no choice but complete replacement, with the derivative mechanism described above providing some relief. But a lot of functionality is in services, and while there is alter functionality for services (which is neither a hook nor an event, because both of those depend on services), it suffers from the same problem: what happens when two modules want to alter the same service?

Thankfully, for services there is a better way: decorators.


For the original problem we needed to find the bridge between form API and AJAX commands — there must be one!

Indeed, the form_ajax_response_builder service implements an interface with just one method, which receives the Form API information and an initial set of commands and builds a response out of them. It’s real lucky this was architected like this: the only non-test call in core passes an empty set of initial commands, so it wouldn’t have been unreasonable to not have this argument at all, and then we would be in a pickle. As it is, we can decorate it. This means our service will replace the original, but at the same time the original will not be tossed; rather, it is renamed and passed to ours, and we will call it:

  sd8.form_ajax_response_builder:
    decorates: form_ajax_response_builder
    class: Drupal\sd8\Sd8FormAjaxResponseBuilder
    arguments: ['@sd8.form_ajax_response_builder.inner', '@marketo_ma']

And the shape of the class is this:

class Sd8FormAjaxResponseBuilder implements FormAjaxResponseBuilderInterface {

  protected $ajaxResponseBuilder;

  protected $marketoMaService;

  public function __construct(FormAjaxResponseBuilderInterface $ajaxResponseBuilder, MarketoMaServiceInterface $marketoMaService) {
    $this->ajaxResponseBuilder = $ajaxResponseBuilder;
    $this->marketoMaService = $marketoMaService;
  }

  public function buildResponse(Request $request, array $form, FormStateInterface $formState, array $commands) {
    // Custom code comes here, adding commands to $commands to taste.
    // ...
    // And then we call the original.
    return $this->ajaxResponseBuilder->buildResponse($request, $form, $formState, $commands);
  }

}
The name of our service is 100% irrelevant as it’ll be renamed to form_ajax_response_builder. Now if two modules want to mess with AJAX forms, they do not step on each other’s toes. We do not rely at all on the form_ajax_response_builder service being the core implementation. Although with just a single method it matters less here, take care to implement every method of the interface and call the inner service, instead of extending the core original and overriding just the method you need: you can’t know whether the service you decorate will always be the core implementation. Be a good neighbour. It’s only a bit more work, mostly simple typing. And as the Writing the tranc module article mentioned, you might discover some bugs and problems when properly delegating.

So this is alter on steroids: if you need to change some functionality provided by a service and no official way exists, you can decorate it, write some boilerplate implementing the interface by calling the inner service methods, and Bob’s your uncle.
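Stripped of everything Drupal, the pattern itself fits in a few lines. A minimal, framework-free sketch with made-up names (GreeterInterface and friends are not real Drupal code):

```php
<?php

interface GreeterInterface {
  public function greet(string $name): string;
}

final class CoreGreeter implements GreeterInterface {
  public function greet(string $name): string {
    return "Hello, $name!";
  }
}

// The decorator implements the same interface, keeps the inner ("original")
// implementation, adds its own behaviour and delegates every call.
final class ShoutingGreeter implements GreeterInterface {
  public function __construct(private GreeterInterface $inner) {}

  public function greet(string $name): string {
    return strtoupper($this->inner->greet($name));
  }
}

$greeter = new ShoutingGreeter(new CoreGreeter());
echo $greeter->greet('world'), "\n"; // HELLO, WORLD!
```

Stacking works the same way: another module can wrap ShoutingGreeter again, and neither wrapper needs to know what the innermost implementation is.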

September 28, 2020


  1. USB C is a physical connector. It has four high speed lanes and assorted tidbits: most importantly power, a separate pair of wires for USB 2.0, and finally one wire to negotiate power and data mode.
  2. Everything is negotiated: which end behaves as a power provider and which end behaves as a power sink. Which end behaves as the downstream data port (host) and which one is the upstream port (device). What kind of data will be transmitted.
  3. Power: 5V 3A for legacy devices; this is always available and is the only thing that requires no negotiation, merely a few resistors. Up to 60W (20V 3A) is possible with every USB C-C cable; the voltage and amperage are negotiated. 100W (20V 5A) requires a special cable. Some 5V-only devices do not implement the specification properly and can only be used with an A-C cable or from a 5V-only USB C charger. r/UsbCHardware/ calls these “broken” for good reasons.
  4. The high speed lanes can carry USB signals, DisplayPort signals or Thunderbolt signals (in theory they could carry anything, but these are the ones used in reality).
  5. USB needs one lane to transmit and one lane to receive 5 or 10gbit per second USB data. As mentioned, USB 2.0 speed is always available, separately.
  6. DisplayPort can use two or four lanes to transmit video data. It is possible to use two lanes for DisplayPort and two lanes for USB. DisplayPort data is commonly 4.32Gbps per lane of effective video bandwidth as defined in DisplayPort 1.2 (5.4Gbps with overhead); more rarely it can be 6.5Gbps per lane as defined in DisplayPort 1.3 (8.1Gbps with overhead). The latter requires DisplayPort 1.4 support from the host (1.3 alone is not used in practice), which is rare because Intel integrated GPUs are DP 1.2 except for “Ice Lake” and “Tiger Lake” chips. Video bandwidth calculators: 1 2. Practically all USB C - DP adapters work with DP 1.4 without a problem, as these adapters just negotiate the correct mode on the USB C side and do not touch or even know anything about the actual DisplayPort signal.
  7. Thunderbolt is a different world, it requires special cables. It occupies all four lanes, it’s a bus with a 40gbit/s data rate. It carries a mixture of PCI Express and DisplayPort data. The PCI Express data speed is nerfed by Intel to 22gbps although many laptops with a single TB3 port can only do 16gbps. The TB3 bus does not carry USB signals, USB ports are provided by a USB root hub built into the dock’s TB3 controller. The only supplier of TB3 controller ICs is Intel. They have two generations of chips, the older Alpine Ridge only supports DisplayPort 1.2, Titan Ridge also supports DisplayPort 1.4.
  8. USB 4 is Thunderbolt 3 with a very important addition: now USB packets can be found on the bus too. This eliminates the hotplugged USB root hub, for stability and a much better overall user experience. Also, it’s very likely PCIe will be able to reach 32gbps this time (or maybe even 40gbps with PCIe 4.0?). This mode, however, will be optional. Everything above still applies, so USB 4 ports will be even more confusing in their capabilities.
  9. To avoid this confusion, Intel decided to name “USB 4 with every feature required” Thunderbolt 4.
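The per-lane numbers in item 6 follow directly from the link rates and the 8b/10b encoding overhead; a quick back-of-the-envelope check (PHP used just for the arithmetic; the 4K @ 60Hz requirement figure of roughly 12.5 Gbit/s assumes 8-bit RGB with reduced blanking):

```php
<?php

// DisplayPort 1.2 (HBR2) runs at 5.4 Gbit/s per lane on the wire; 8b/10b
// encoding leaves 80% of that as effective video bandwidth.
$hbr2 = 5.4 * 0.8;    // 4.32 Gbit/s per lane
// DisplayPort 1.3/1.4 (HBR3) runs at 8.1 Gbit/s per lane on the wire.
$hbr3 = 8.1 * 0.8;    // 6.48 Gbit/s per lane, the "6.5" above

echo 2 * $hbr2, "\n"; //  8.64 Gbit/s: two lanes, not enough for 4K @ 60Hz
echo 4 * $hbr2, "\n"; // 17.28 Gbit/s: four lanes, enough for 4K @ 60Hz
echo 4 * $hbr3, "\n"; // 25.92 Gbit/s: four HBR3 lanes
```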

To run multiple monitors:

  1. The DisplayPort standard has its own thing where it can split the data coming out of a single connector to multiple displays. This is called MST and is not supported by Mac OS.
  2. Thunderbolt behaves as if there were two DisplayPort connectors and is the only way for Mac OS to run multiple monitors while plugging a single cable into the host. Plugging two cables saves you hundreds of dollars. Caveat: many laptops with a single Thunderbolt port use an Alpine Ridge LP controller and only have one DisplayPort 1.2 worth of bandwidth on the bus. You can check whether yours has it here even if you don’t run Linux; the components list is correct.
  3. The above was for two monitors, and that’s the top of Mac capability. For Windows, some first party docks (Lenovo, Dell) have MST hubs built into them. These are much cheaper on eBay; often the cheap auctions come without a power brick. Both Lenovo and Dell have standardized on their power bricks within the brand, so any high wattage Dell brick will work for a Dell dock, same for Lenovo. Make sure to buy them from the USA: even if it is purported as original Dell, Chinese auctions are kinda sus.

To run multiple laptops from the same monitor / USB peripherals aka KVM:

  1. I only know of one USB C switch, it’s industrial and breathtakingly expensive.
  2. There are some KVM switches which have USB C inputs and legacy outputs: iogear GUD2C04 Access Pro, Black Box USB-C 4K KVM.
  3. The cheapest solution by far is to forget USB C and use a software KVM. It detects when a USB A switch connects/disconnects the peripherals and sends the monitor a request to switch inputs. This obviously only works if the monitor has multiple inputs, but most do.

To connect USB C monitors:

  1. Belkin has a VR cable which plugs into USB A and DisplayPort inputs and a USB C monitor.
  2. The Wacom Link Plus has USB A, HDMI and DisplayPort inputs and a USB C output.
  3. The Dell WD19 is a USB C hub which has a USB C downstream capable port. This is unique.
  4. TB3 docks with a downstream (chaining) TB3 port are also usable as a plain USB C port which is also DisplayPort alternate mode capable.


  1. Naming is not a strength of the USB IF. 5gbps USB is called USB 3.0, USB 3.1 Gen 1, USB 3.2 Gen 1, SuperSpeed USB. 10gbps USB is called USB 3.1 Gen 2, USB 3.2 Gen 2, SuperSpeed Plus USB. We typically just call them 5/10gbps USB to avoid wading into this mess.
  2. The faster the data speed, the shorter the cable. Cables omitting the high speed lanes (so only USB 2.0 and charging is possible) can be 4m long; 5gbps lane speed allows for 2m, 10gbps only allows for 1m. There are two ways to escape these limits: the cheap way, where marketing will spin a story on how a cable made from the finest chinesium can surpass the spec, and the expensive way, where active circuitry is added to the cable to avoid the signal loss. Cable Matters has a 3m 10gbps cable, a 5m 5gbps cable, and that’s it for affordable active USB C cables. Thunderbolt cables can be 0.5m for 40gbps passive (although some 0.8m cables have appeared recently; Plugable is recommended), up to 2m for 20gbps passive, or up to 2m for 40gbps with an active cable. The active cables can only be used for Thunderbolt, not plain USB, except for the Apple Thunderbolt 3 Pro cable.
  3. Docks touting “4K support” very, very often mean “4K @ 30Hz”, because they utilize two lanes for DisplayPort and two lanes for USB 3.0, and that’s what two lanes worth of DisplayPort is capable of. In reality no one wants a 30Hz monitor, so up to 3440 x 1440 @ 60Hz and 1080p @ 144Hz are the typical maximum resolutions used with these docks (HDMI 1.4 can only do 1080p @ 120Hz; you need DisplayPort for 144Hz). Again, video bandwidth calculators: 1 2. If you need USB 3.0 then these are the maximums without Thunderbolt (and without DisplayPort 1.4).

Avoid the following:

  1. Docks passing PD power with removable cables. You need DC input for such. More in this article. tl;dr: every cable, including the one between the hub and the device, has a loss. If the cable is captive and short, the loss can be calculated; otherwise you’d need to boost the power and do a full power delivery renegotiation, which most hubs don’t do.
  2. Magnetic cables not only violate the specifications but pose immense danger to the host: they expose the pins that the normal connector hides within a grounding shroud (note how DisplayPort, HDMI, USB and more share this general design), and a static discharge might fry the device.
  3. Sometimes, when trying to escape the bandwidth limits, you will find docks utilizing something called DisplayLink. The biggest tell is the ability to run video from a USB 3.0 (aka USB A) port. These are good for running office apps but not much else. Gaming will especially suck. In general, you should avoid these. Disappointment is almost guaranteed.

August 28, 2020

There are two modules providing lat/lon storage in Drupal 8/9: geolocation and geofield. I went with geofield simply because geocluster is using it. geocluster clusters on the server side, and that lets you display an astonishing number of elements on a single map. While geolocation is mostly a single module, there’s an entire family of modules we will need here. Also, as far as I can tell, we are limited to Leaflet here because I can’t find ready-made GeoJSON support for anything else.

  1. composer require drupal/geocluster drupal/views_geojson drupal/leaflet drupal/leaflet_geojson
  2. Add a geofield (NOT a geocluster field: that’s a bug, a patch has been filed); I called it coordinates.
  3. Add a view, no page, no block, nothing.
  4. Leave the pager in place. We will remove it later.
  5. Add field Geocluster lat (coordinates). Leave aggregator settings on “Group results together”.
  6. Add field Geocluster lon (coordinates)
  7. Add field Geocluster result count (coordinates)
  8. Add field coordinates and exclude it from display
  9. Add title and exclude it from display. Set aggregator settings to “GROUP_CONCAT”.
  10. Remove sort criteria.
  11. Now add a GeoJSON export and in its settings click “Enable geocluster for this search”.
    1. Set Map Data Sources to Other: Lat/Lon Point
    2. Set Latitude field to Geocluster lat (coordinates)
    3. Set Longitude field to Geocluster lon (coordinates)
    4. Set Title field to Title.
  12. Now magic has happened! If you set up Views to show the query, you will see GROUP BY node__coordinates_coordinates_geocluster_index_1 is the only GROUP BY left. This is why it works at all.
  13. You can now remove the pager.
  14. Add a Views GeoJSON: Bounding box contextual filter
    1. Provide default value
    2. Type: Query Parameter
    3. Query parameter: bbox
    4. Fallback value: -180,-90,180,90
  15. The leaflet_geojson module provides a block which miraculously (well, based on the GeoJSON export) will pick up our view and it’ll just work.
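For orientation, what the GeoJSON export hands to Leaflet is a plain FeatureCollection; a clustered row produced by the fields above comes out roughly like this (the coordinates, titles and exact property names here are illustrative, not copied from the module):

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": { "type": "Point", "coordinates": [19.04, 47.49] },
      "properties": {
        "name": "Trail one,Trail two",
        "cluster_items": 2
      }
    }
  ]
}
```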

August 28, 2020

Our bug was views-view-field.html.twig coming back empty for the entity API field track_icon on node 80136.
The template only prints the output variable.
Where is that coming from?

function template_preprocess_views_view_field(&$variables) {
  $variables['output'] = $variables['field']->advancedRender($variables['row']);
}

In FieldPluginBase::advancedRender I set a breakpoint

$values->_entity->id() == 80136 && $this->table == 'taxonomy_term__track_icon'

we hit my breakpoint exactly twice as expected because we print this node twice
the second time $raw_items = $this->getItems($values); comes back empty (not good!)
EntityField::getItems() runs $build_list = $this->getEntityFieldRenderer()->render($values, $this);
in turn EntityFieldRenderer::render runs into Pick the render array for the row / field we are being asked to render, and remove it from $this->build to free memory as we progress.
the meat is $build = $this->build[$row->index][$field_id];
now, $build when broken only contains cache metadata, nothing else. Just a #cache key, with context and tags.
Paging through the class (it’s not a lot of code), we find the buildFields method, which iterates every row in the result and calls EntityViewDisplay to run the formatters:

$display_build = $display->buildMultiple($bundle_entities);  

EntityViewDisplay::buildMultiple has this gem

$build_list[$id][$name] = $field_access->isAllowed() ? $formatter->view($items, $view_langcode) : [];  
// Apply the field access cacheability metadata to the render array.  
$this->renderer->addCacheableDependency($build_list[$id][$name], $field_access);  

that is beyond suspicious
because that’s where the “nothing but cacheable metadata” might very well come from
so we slap a $name === 'track_icon' && !$build_list[$id][$name] breakpoint here (we know $name is the field name because of foreach ($this->getComponents() as $name => $options))
it doesn’t fire
that was a good shot
let’s try $name === 'track_icon' && !isset($build_list[$id][$name]['#theme'])
that fires the expected number of times, yay
so i set a breakpoint on

$build_list[$id][$name] = $field_access->isAllowed() ? $formatter->view($items, $view_langcode) : [];

itself for name track_icon and id 1 and step through
we ran afoul of

if (static::$recursiveRenderDepth[$recursive_render_id] > static::RECURSIVE_RENDER_LIMIT) {  

in EntityReferenceEntityFormatter::viewElements
The recursion protection is only ever increased, never decreased.
Despite the name of the constant RECURSIVE_RENDER_LIMIT, the doxygen accurately tells you it actually has nothing to do with recursion, and this is a feature, not a bug:

The number of times this formatter allows rendering the same entity.

I will, for now, live without this feature.
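The behaviour that bit us is easy to reproduce outside Drupal. A simplified sketch of the guard (the function and array below are made up, not core’s actual code; core’s limit really is 20):

```php
<?php

const RECURSIVE_RENDER_LIMIT = 20;

function viewEntity(string $id, array &$depths): array {
  // The counter is only ever incremented, never decremented...
  $depths[$id] = ($depths[$id] ?? 0) + 1;
  // ...so the limit caps the total number of renders of the same entity per
  // request, not the recursion depth.
  if ($depths[$id] > RECURSIVE_RENDER_LIMIT) {
    return [];
  }
  return ['#markup' => 'entity ' . $id];
}

// Render the same entity 25 times on one page, no recursion anywhere.
$depths = [];
$results = [];
for ($i = 0; $i < 25; $i++) {
  $results[] = viewEntity('80136', $depths);
}
// The first 20 calls return markup; the last 5 come back empty.
```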

August 28, 2020

Writing the tranc module

We hit the “Content and interface translation don’t clearly separate” issue. I set out to fix it for ourselves and then released it back. It’s possible the solution is not correct or not useful for anyone else; nonetheless, some of the coding challenges are worth talking about.

You need the tranc module at hand to make any sense of this post.

Of decorators and testing

It’d be tempting to write TrancTranslationManager extends TranslationManager, change the langcode in getStringTranslation, and be done with it. This would, however, introduce a strong coupling between our module and TranslationManager, and a future core upgrade might just break it, perhaps in some subtle fashion. Instead we use the decorator pattern: implement the relevant interfaces, with each method changing the langcode as necessary and delegating to the original string_translation service. Another advantage besides avoiding possible future bugs: the webprofiler module replaces the class on the string_translation service; if it were decorating instead, tranc and webprofiler could be run at the same time without a problem. Good thing we do not use the webprofiler. Someone should file a patch against it to decorate…

Yet another advantage of this decorator class is its closedness. We know every nook and cranny of it. We can reason about it. Even without a test, we can confidently say it is doing what it’s supposed to do. It is very easy to see the external dependencies: there is exactly one call to anything that is not the translationManager. Of course, bugs might still happen: maybe some of the arguments of the proxy calls are in the wrong order, maybe we left out a return. On the other hand, the IDE would not let us leave out a return, or introduce one where it is not needed. I would actually argue against writing a unit test for a class like this: it would be the expression of the same logic in a different format and just an unnecessary maintenance headache. It will definitely not find bugs. In fact, the first version of this class had a very unexpected bug, one that neither a unit nor a kernel test would find!

The simplest test is to enable the module and visit a page. This blows up. W.T.F. As the doxygen notes, the LanguageRequestSubscriber class calls a public method on the TranslationManager class which is not on the interface. This happens to be a core bug, so that’s great: we discovered a core bug which should be easy to fix, as adding methods to interfaces is not considered a BC break. This is the fundamental problem with a lot of testing and indeed with object oriented programming itself: you imagine a world and fit your test or class to it. But what happens when the world does not adhere to the mental model of a puny programmer? Sucks to be you, that’s what happens.

Speaking of doxygen, that doxygen is absolutely necessary and useful. Putting phpcs-enforced doxygen on protected $languageManager saying “The language manager”, however, is just clutter. Unless forced, don’t do this either.

Of Twig and documentation

Another part of the module changes the default theme to print in the content language. I know enough of Twig to know that changing a template from code requires a visitor, but it’s been a very, very long time since I wrote one. So before I wrote a single line of code, I read https://twig.symfony.com/doc/2.x/api.html and https://twig.symfony.com/doc/2.x/internals.html Well, I only read the Basics section of the former; then came Rendering and I stopped there because it didn’t look relevant. The internals page looks much more relevant and it’s short enough. I also explored the core Twig integration: TwigEnvironment, TwigExtension (only down to getName; the rest is very clearly not relevant to us, being implementations of various Drupal specific Twig functionality) and TwigNodeVisitor. TwigNodeVisitor makes us very happy because it changes one filter to another, which is exactly what we need to change the t filter to tc. But how will we know we are in the default theme? Well, on the Drupal half we can fish out the default theme from somewhere, and on the Twig half, I dunno, surely a Twig node carries its filename. Well, Node::getFilename has this most helpful message:

@trigger_error('The '.__METHOD__.' method is deprecated since version 1.27 and will be removed in 2.0. Use getTemplateName() instead.', E_USER_DEPRECATED);

This really is very helpful, because I would never have guessed that getTemplateName is the filename! It is certainly not documented anywhere I can find. Once you have it, of course, it’s easy to verify, for example in Compiler:

$this->filename = $node->getTemplateName();

As for finding the default theme, I Googled drupal 8 get default theme; the first non-drupal StackExchange answer is ThemeHandler::getDefault. This returns a string, but there’s also a getTheme method on the theme handler; it returns an Extension object which has the getPath method we need. So that’s done. (While none of the Drupal SE answers are a direct answer, this answer can be used to deduce the correct method, despite it only mentioning the deprecated setDefault method: surely there’s a getDefault.)

It’s worth implementing the visitor this far.

For the trans tag, I decided I wanted to change the langcode in its options, as that seemed much easier than introducing a transc tag. First, I wanted to write a little exploratory script to see what {% trans %} parses into. The internals page shows how to get to the nodes. The whole page has three lines of code; let’s try to make them work.

The first line of code uses three variables: $twig, $source, $identifier. The explanation mentions $twig is an environment, and while it’s not crosslinked, the API page mentions environments, and our core read also tells us that Drupal::service('twig') returns just that. That was our first variable; the second, $source, is just the Twig template we want to parse. Now what’s $identifier? Mystery! Neither the API nor this page ever mentions it. I left it empty, and the tokenizer and the parser ran fine, but the compiler complained it can’t find the template. Ah ha! Where did we read about defining templates on the fly? Right, we just read the core TwigEnvironment class, which in renderInline reminded us Drupal has inline templates. I tried putting {# inline_template_start #} in front of my little Twig template; that didn’t work. I searched the Drupal codebase for this curious string and there are not many results; StringLoader::exists looks interesting and highly relevant: it looks at the template name and if it starts with this string, it declares the template exists. How do we set the template name…? Well, our chain started with Source, and peeking into the Twig Source class confirms our suspicions: what the internals page calls $identifier is just the template name (which above already turned out to be the filename, normally… what a mess). So:

$twig = \Drupal::service('twig');
$string = '{% trans %}x{% endtrans %}';
$stream = $twig->tokenize(new \Twig\Source($string, '{# inline_template_start #}'));
$nodes = $twig->parse($stream);

drush scr test.php works. We can print $nodes to see the nodes, and we can print the compiled code as well. Phew! We can go bolder and do the same for:

$string = "{% trans with {'context': 'foo', 'langcode': 'bar'} %}x{% endtrans %}";

And print $nodes now tells us everything we needed: the node is of class TwigNodeTrans, the options are an ArrayExpression, the strings are wrapped in ConstantExpression, our visitor pretty much writes itself from this point.

Now we want to test this… If you followed the aforementioned renderInline call chain you would have seen

$loader = new ChainLoader([
    new ArrayLoader([$name => $template]),
    $current = $this->getLoader(),
]);

which tells us the way this template gets registered is via new ArrayLoader([$name => $template]). We learned on the API page that an environment needs a loader, and now we have one. So, using this info, the TrancNodeVisitorTest::testTrancNodeVisitor method almost writes itself; it’s just a little bit more than the exploratory script above. It needs the core Twig extension so that the trans tag can get registered, and the tranc Twig extension as well, but since those don’t depend on the actual test case, they are created in setUp. Making a core extension is stolen from the core TwigExtensionTest, just modernized slightly. Our extension needs a theme handler mock, not too hard either.

We can summarize our journey by saying Twig is extremely powerful and even worse documented than Drupal. The source code, however, is very well structured; the classes are small and almost all method names are self explanatory. Once you know how to get to the Twig nodes (which now you do! the test case especially is very generic), simply printing them out tells you everything. Who needs documentation when you have such wonderful debug features? Imagine if printing a Drupal content entity similarly printed the names of the fields, the field item list classes, the field item classes, the properties and their values. Sci-fi. On the other hand, I love spending each weekend on some interesting project. Hmmmmm…

June 21, 2020