Libraries can start processes too!

It’s the kind of thing that, if you are used to other languages, would make you very suspicious. Library code starting its own processes — you can almost feel the mess and sense the bugs just musing over it. Yet in Elixir (and, of course, Erlang too), this is a totally normal thing to do.

The Elixir approach to shared mutable state is to wrap it in a process. In this case I needed a counter, and the easiest way to implement one is with an Agent, a kind of process designed to hold simple state. Its get_and_update function lets me return the counter and increment it as a single atomic operation.
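A minimal sketch of the idea (Ergo.Counter is a name I’m using just for illustration):

defmodule Ergo.Counter do
  use Agent

  def start_link(_opts) do
    Agent.start_link(fn -> 0 end, name: __MODULE__)
  end

  # Return the current count and increment it in one atomic step.
  def next_id do
    Agent.get_and_update(__MODULE__, fn n -> {n, n + 1} end)
  end
end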

To start an Agent you use the Agent.start_link function. But where to call it? Do I have to add some equivalent initializer function to my library? While not exactly onerous, it felt awkward, as though my implementation was leaking into the caller. Then again, did I have to stop the agent process somewhere? Where would I do that?

Now, I did figure out how to manage the life-cycle of the agent process myself within the library, but it turns out to be unnecessary. All I had to do was make one change to my mix.exs file and one addition to a module in my library.

def application do
  [
    extra_applications: [:logger]
  ]
end

becomes:

def application do
  [
    extra_applications: [:logger],
    mod: {Ergo, []}
  ]
end

along with changing the referenced library module, Ergo, to look like:

defmodule Ergo do
  use Application

  def start(_type, _args) do
    Supervisor.start_link([Ergo.AgentModule], strategy: :one_for_one)
  end

  …
end

This is enough that any application using my library (called Ergo, btw) will automatically start the Agent and manage its life-cycle, without me, or the calling application, needing to know anything about it at all.

This is a pretty neat trick.

Elixir Protocols vs Clojure Multimethods

I am coming to appreciate José Valim’s creation, Elixir, very much. It is fair to say that Rich Hickey set a very high bar with Clojure and Elixir for the most part saunters over it. I think that is a reflection of two very thoughtful language designers at work. But there is some chafing when moving from Clojure to Elixir.

I wrote previously about how Clojure’s metadata feature lets one subtly piggyback data in a way that doesn’t require intermediate code to deal with it, or even know it exists. It creates a ‘functional’ backchannel. Elixir has no equivalent feature.

If your data is a map you can use “special” keys for your metadata. Elixir does this itself with the __struct__ key that it injects into the maps it uses as the implementation of structs. You mostly don’t know it’s there but would have to implement a special case if you ever treated the struct as a map.
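You can see it from iex, since a struct really is just a map underneath:

iex> Map.has_key?(%URI{}, :__struct__)
true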

However, if the value you want to attach metadata to is, say, an anonymous function then you’re out of luck. In that case, you have to convert your function to a map containing the function and metadata and then change your entire implementation. That could be a chore, or worse.
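To sketch what I mean (this Parser struct is hypothetical):

defmodule Parser do
  # Wrap the parsing function together with its metadata.
  defstruct [:fun, :meta]
end

parser = %Parser{fun: fn input -> String.upcase(input) end, meta: %{kind: :upcase}}
parser.fun.("hello")  # => "HELLO"
parser.meta           # => %{kind: :upcase}

Every call site now has to unwrap the function, which is exactly the implementation change I’d rather avoid.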

Today I hit the problem of wanting to define a function with multiple implementations depending on its arguments. Within a module, this is not hard to do as Elixir functions allow for multiple heads using pattern matching. It’s one of the beautiful things about writing Elixir functions. So:

defmodule NextWords do
  def next_word("hello"), do: "world"
  def next_word("foo"), do: "bar"
end

Works exactly as you would expect. Of course, the patterns can be considerably more complex than this and allow you to, for example, match values inside maps. So you could write:

def doodah(%{key: "foo"}), do: "dah"
def doodah(%{key: "dah"}), do: "doo"

And that would work just as you expect too! Fantastic!

But what if you want the definitions of the function spread across different modules?

defmodule NextWords1 do
  def next_word("hello"), do: "world"
end

defmodule NextWords2 do
  def next_word("foo"), do: "bar"
end

This does not work because, while NextWords1.next_word and NextWords2.next_word share the same name, they are in all other respects unrelated functions. In Elixir a function is identified by the tuple {module, name, arity}, so functions in different modules are totally separate regardless of name & arity.

In Clojure, when I need to do something like this I would reach for a multi. A Clojure multi uses a function defined over its arguments to determine which implementation of the multi to use.

(defmulti foo (fn [x y z] … turn x, y, z into a dispatch value, e.g. :bar-1, :bar-2 or what have you))

(ns bar-1)
(defmethod foo :bar-1 [x y z] … implementation)

(ns bar-2)
(defmethod foo :bar-2 [x y z] … another implementation)

The dispatch function returns a dispatch value and the methods are parameterised on dispatch value. Different method implementations can live in different namespaces and a call with the right arguments will always resolve to the right implementation, regardless of where it was defined.

Now Elixir has an equivalent to multi, the Protocol. We can define the foo protocol:

defprotocol Fooish do
  def foo(x)
end

Now in any module, we can define an implementation for our Fooish protocol. But, and here’s the rub, an implementation looks like:

defmodule Foo1 do
  defimpl Fooish, for: String do
    def foo(s), do: …impl…
  end
end

So an Elixir protocol can only be parameterised on the type of its first argument! This means that there’s no way to dispatch protocol implementations based on lovely pattern matching. Disappointing.

It may even be that a smarter Elixir programmer than I could implement Clojure-style multimethods in Elixir. For now, I can find a work-around and I’m still digging Elixir a lot.
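The obvious work-around, sketched here with the hypothetical modules from above, is a central dispatcher that pattern matches and delegates:

defmodule NextWords do
  # Delegate to the per-module implementations via pattern matching.
  def next_word("hello" = s), do: NextWords1.next_word(s)
  def next_word("foo" = s), do: NextWords2.next_word(s)
end

It works, but it reintroduces a single module that has to know about every implementation, which is exactly what a Clojure multi avoids.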

Value metadata is a subtle but useful language feature

Since I am using Elixir to create AgendaScope, I’ve written quite a lot of Elixir code to get myself up to speed. Most recently a parser combinator library, Ergo. And it’s in building Ergo that I’ve realised I really miss a feature from Clojure: metadata.

In short, Clojure allows you to attach arbitrary key-value metadata to values such as collections, symbols, and functions. While there is syntax sugar, it boils down to:

(meta obj) to get the map of metadata key-value pairs for the value obj.

(vary-meta obj f & args) to modify the metadata associated with the value obj.

If you make a copy of obj the metadata is carried with it. In all other respects obj is unaffected. Why is this so useful?

Well, I first used it to track whether a map representing a SQL row had come from the database or not, as well as associated data about the record. I could have put that in the map using “special” keys, but this pollutes the map with stuff not strictly related to its purpose and leads to more complex code with special cases to handle those keys. And what if obj wasn’t a map to which I could add special keys?

Clojure makes heavy use of it for self-documenting, tracking attributes, and tagging of all different kinds of structures.

What about Ergo? Well, being a parser combinator library, Ergo is about building functions, usually returning them as anonymous functions. When debugging a parser you’d really like to know more about it & how it was constructed, but all you have is a function f.

In Clojure I’d just put all that in the metadata of f and it would be no problem later to get at it for debugging purposes by calling (meta f), the code of the parser itself needing to know nothing about the metadata at all. But in Elixir… I am not sure what to do.

Elixir functions can have multiple heads so it occurs to me that I could return my parsers like:

fn
  %Context{} = ctx -> # do parsing stuff here
  :info -> # return some metadata instead
end

And this could work but also feels brittle to me. If a user defines a parser and doesn’t add an :info head then my debugging code that invokes f.(:info) is going to generate an error. It doesn’t smell right. Is this the best I can do?
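The best guard I can think of is to probe defensively, something like this sketch (parser_info is a name I’ve made up):

# Probe a parser for metadata, tolerating parsers without an :info head.
def parser_info(parser) when is_function(parser, 1) do
  try do
    parser.(:info)
  rescue
    FunctionClauseError -> %{}
  end
end

But rescuing my way around missing clauses doesn’t smell much better.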

I’m beginning to wish Elixir had metadata functionality equivalent to Clojure’s.

Connecting Elixir GenServer and Phoenix LiveView

My new company AgendaScope is building its product using the Elixir/Phoenix/LiveView stack and, while I have spent some time in recent months learning Elixir and Phoenix, LiveView was entirely new to me. This weekend I decided it was time to crack it.

I had a little project in mind: creating a dashboard of thermal info from my Mac. There’s a reason for that, which is another post, but suffice it to say I needed information displayed on a second computer, preferably my iPad.

The way I chose to tackle this problem was to create a GenServer instance that would monitor the output of a few commands on a periodic basis (at the moment once every 5 seconds). The commands are:

thermal levels

powermetrics --samplers smc | grep -i "die temperature"

uptime

The Elixir Port module makes this a pretty trivial thing to do with the GenServer handle_info callback receiving the command output that I parse and store in the GenServer state.
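In outline it looks something like this (a simplified sketch; the real state and parsing are more involved):

defmodule ThermalsService do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def init(_opts) do
    send(self(), :sample)
    {:ok, %{}}
  end

  # Every 5 seconds run a command; its output arrives as port messages.
  def handle_info(:sample, state) do
    Port.open({:spawn, "uptime"}, [:binary])
    Process.send_after(self(), :sample, 5_000)
    {:noreply, state}
  end

  # Receive a line of command output, parse it, and store it in the state.
  def handle_info({_port, {:data, text_line}}, state) do
    {:noreply, Map.merge(state, parse(text_line))}
  end

  defp parse(_text_line), do: %{} # parsing elided
end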

On the other end, it turns out LiveView is pretty simple. A LiveView template is an alternative to a regular static template (this part had not been clear to me before; I had thought they sat side-by-side).

The LiveView module in my case ThermalsLive stores things like CPU pressure, GPU pressure, & CPU die temp, in the Socket assigns. And a template is simple to create. LiveView takes care of all the magic of making things work over a web socket.
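Displaying one of those assigns in the template is just interpolation, e.g. this hypothetical line:

<div>CPU pressure: <%= @cpu_pressure %></div>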

For example, in a LiveView template the markup <a href='#' phx-click='guess' phx-value-number='1'>1</a> generates a link that results in the running LiveView module receiving the handle_event callback with the value 1. Like this:

def handle_event("guess", %{"number" => guess} = data, %{assigns: %{score: score, answer: answer}} = socket) do
  …
end

Something I learned about LiveView is that each browser session gets its own stateful server side LiveView module that lasts throughout the session. When a LiveView event handler changes the assigns in the socket structure LiveView responds by re-rendering the parts of the template that depended on those particular assigns. This is good magic!

Now the question was: How does my GenServer (ThermalsService) notify the LiveView (ThermalsLive) when one of the variables it’s monitoring (cpu_pressure) has changed? This event is not coming from the browser so the regular LiveView magic isn’t enough.

I couldn’t find a good simple example, so what I’ve learned was gleaned from some books and Google searching. The short answer is Phoenix.PubSub; here’s the longer answer.

First we need a topic. When three friends are at a table for lunch their topic might be “politics” and one of them might tune out and not be listening. But when the topic changes to “cheese” they tune back in and hear those messages.

For our GenServer & LiveView the topic (defined as a module attribute @thermals_topic) is going to be thermals_update, because I want them to talk about all the thermal readings on one topic rather than using separate topics for individual variables like cpu_pressure and gpu_die_temp.

On the LiveView end we need to subscribe to the same topic. The right place to do this appears to be in the mount callback where the socket assigns are first set.

def mount(_params, _session, socket) do
  if connected?(socket) do
    DashletWeb.Endpoint.subscribe(@thermals_topic)
  end
  …
end

We test if the socket is connected because it turns out a LiveView module calls mount twice: once in response to the initial browser request, where it returns the HTML page that starts a web socket connection back to the server, and a second time in response to setting up the web socket for long-term communication between client and server. This second call seems to be the best point to subscribe to the topic.

It wasn’t obvious that the DashletWeb.Endpoint module had a subscribe function ready-made for this purpose and I was mucking about in the Phoenix.PubSub module for a bit before I hit on this.

My GenServer process ThermalsService receives handle_info callbacks from the Port that contain the terminal output of the running command as a string. With a little bit of parsing & conversion we end up with something like cpu_pressure: 56 and, as well as storing this in the GenServer state, we need the LiveView module to know about it. We need to broadcast a message to the thermals_update topic.

This turned out to be pretty simple:

alias Phoenix.PubSub
@thermals_topic "thermals_update"

def handle_info({_port, {:data, text_line}}, state) do
  …
  PubSub.broadcast(Dashlet.PubSub, @thermals_topic, {"cpu_pressure", cpu_pressure})
  …
end

I confess I am still not sure what Dashlet.PubSub is. By my Elixir knowledge it should be a module but I can’t find it defined anywhere. Anyway, it works.

The last piece of the puzzle, which I couldn’t find stated anywhere but inferred from examples, is that for each broadcast there is a call to the handle_info callback on all topic subscribers (in this case our LiveView module).

So, in my live view ThermalsLive I add:

def handle_info({"cpu_pressure", pressure}, socket) do
  {:noreply, assign(socket, cpu_pressure: pressure)}
end

This is super simple. It receives the cpu_pressure message from the GenServer along with the pressure value and stores it in the LiveView socket assigns (the same way the handle_event callback would do in response to clicking a browser link). This is enough for LiveView to take the hint and trigger a client update.

What foxed me was that regular LiveView events arrive via the handle_event callback that receives the LiveView socket instance. You need the socket to change its assigns to trigger an update. It didn’t occur to me that PubSub might ensure that the handle_info path would be equivalent.

And, there you have it, sending data from a GenServer via Phoenix PubSub to a LiveView module. I learned quite a bit via this exercise and I hope this might help anyone following the same path.

Stop aliasing String.t and integer

I’m learning Elixir and, a little unwillingly, learning about its system of type specifications. I will concede that being able to specify the types of return values is helpful, but I am not sure about the rest. Nevertheless, I am told the Dialyzer system (which analyses type information and looks for errors) can be very helpful, so for now I am playing along.

While reading the documentation something caught my eye:

Defining custom types can help communicate the intention of your code and increase its readability.

defmodule Person do
  @typedoc """
  A 4 digit year, e.g. 1984
  """
  @type year :: integer

  @spec current_age(year) :: integer
  def current_age(year_of_birth), do: # implementation
end

I found myself puzzled by this statement. The argument in question, year_of_birth, already clearly indicates it’s a year. And the type cannot enforce the rule “A 4 digit year, e.g. 1984”.

So what is the type spec adding? It seems to me that what it’s adding is a step of cognitive overhead in understanding that what is being passed is an integer.

I’ve seen other examples of creating types for username and password that alias the String.t type, and again I find this unconvincing since the function being spec’d almost certainly calls its arguments username and password, so what is being added?
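The pattern I mean looks something like this (a made-up example):

@type username :: String.t()
@type password :: String.t()

@spec valid_credentials?(username, password) :: boolean
def valid_credentials?(username, password) do
  byte_size(username) > 0 and byte_size(password) >= 8
end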

Where a type adds useful information I buy it. A struct for example is already a kind of alias since it defines compound structure. But for primitive types like String.t and integer it seems like adding aliases is hiding information not adding to it.
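To make the contrast concrete, a struct type genuinely adds structural information that argument names alone don’t carry, e.g. (a sketch):

defmodule Person do
  defstruct [:name, :year_of_birth]

  @type t :: %__MODULE__{name: String.t(), year_of_birth: integer}
end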

Inserting associated models with Ecto

I’m building a “learning app” using the PETAL stack and it’s taxing some of the grey cells that haven’t worked since I was working with Ruby on Rails many years ago.

It’s hit a point where I need to insert two associated records at the same time. I am sure this involves Ecto.Multi, but I’m also trying to understand how to build the form, since form_for expects an Ecto.Changeset and, so far as I can see, these are schema specific. Or, at least, in all the examples I’ve seen so far they have been.

I make a lot of introductions by email so I decided to build a little helper application to make it easier for me. My schema at the moment is quite simple:

Contact <-> Role <-> Company

At the moment I have a very simple CRUD approach where you create Contacts and Companies separately. But, of course, in practice when I create a Contact I want to create the Company and specify the Role at the same time. And that’s where I run into a problem. The common pattern is something like:

def new(conn, _params) do
  changeset = Introductions.change_contact(%Contact{})
  render(conn, "new.html", changeset: changeset)
end

In this case we are creating an Ecto.Changeset corresponding to a new Contact. Later when we want to build a form to edit the details we have:

<%= form_for @changeset, @action, fn f -> %>

Where the form relates the fields of the schema to the fields in the form.

So the question is how you create a “blended” or “nested” Changeset that can contain the details of each of the 3 schemas at work.

I’ve not seen any examples covering this case. I’m muddling my way through it but it would be great to have something to work from.
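From the reading I’ve done so far, the most promising direction looks like Ecto.Changeset.cast_assoc, which lets a parent changeset cast nested parameters into child changesets. A rough sketch, with field names I’ve invented:

# Build a Contact changeset that also casts a nested Role, which in
# turn casts a nested Company (field names here are assumptions).
def change_contact(%Contact{} = contact, attrs \\ %{}) do
  contact
  |> Ecto.Changeset.cast(attrs, [:name, :email])
  |> Ecto.Changeset.cast_assoc(:role, with: &role_changeset/2)
end

def role_changeset(role, attrs) do
  role
  |> Ecto.Changeset.cast(attrs, [:title])
  |> Ecto.Changeset.cast_assoc(:company, with: &company_changeset/2)
end

def company_changeset(company, attrs) do
  Ecto.Changeset.cast(company, attrs, [:name])
end

If that holds up, form_for with this changeset plus inputs_for for the nested parts might give the “blended” form, but I haven’t proven that out yet.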