Elixir Protocols vs Clojure Multimethods

I am coming to appreciate José Valim’s creation, Elixir, very much. It is fair to say that Rich Hickey set a very high bar with Clojure and Elixir for the most part saunters over it. I think that is a reflection of two very thoughtful language designers at work. But there is some chafing when moving from Clojure to Elixir.

I wrote previously about how Clojure’s metadata feature lets one subtly piggyback data in a way that doesn’t require intermediate code to deal with it, or even know it exists. It creates a ‘functional’ backchannel. Elixir has no equivalent feature.

If your data is a map you can use “special” keys for your metadata. Elixir does this itself with the __struct__ key that it injects into the maps it uses as the implementation of structs. You mostly don’t know it’s there but would have to implement a special case if you ever treated the struct as a map.
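As a quick illustration (Point here is just a throwaway example module), the injected key becomes visible as soon as you treat a struct as a plain map:

defmodule Point do
  defstruct [:x, :y]
end

Map.keys(%Point{x: 1, y: 2})
# => [:__struct__, :x, :y], the "special" key shows up alongside your own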

However, if the value you want to attach metadata to is, say, an anonymous function then you’re out of luck. In that case, you have to convert your function to a map containing the function and metadata and then change your entire implementation. That could be a chore, or worse.

Today I hit the problem of wanting to define a function with multiple implementations depending on its arguments. Within a module, this is not hard to do as Elixir functions allow for multiple heads using pattern matching. It’s one of the beautiful things about writing Elixir functions. So:

defmodule NextWords do
  def next_word("hello"), do: "world"
  def next_word("foo"), do: "bar"
end

Works exactly as you would expect. Of course, the patterns can be considerably more complex than this and allow you to, for example, match values inside maps. So you could write:

def doodah(%{key: "foo"}), do: "dah"
def doodah(%{key: "dah"}), do: "doo"

And that would work just as you expect too! Fantastic!

But what about if you want the definitions of the function spread across different modules?

defmodule NextWords1 do
  def next_word("hello"), do: "world"
end

defmodule NextWords2 do
  def next_word("foo"), do: "bar"
end

This does not work because while NextWords1.next_word and NextWords2.next_word share the same name they are in all other respects unrelated functions. In Elixir a function is identified by the tuple {module, name, arity}, so functions in different modules are totally separate regardless of name & arity.

In Clojure, when I need to do something like this I would reach for a multi. A Clojure multi uses a function defined over its arguments to determine which implementation of the multi to use.

(defmulti foo (fn [x y z] … turn x, y, z into a dispatch value, e.g. :bar-1, :bar-2 or what have you))

(ns bar-1)
(defmethod foo :bar-1 [x y z] … implementation)

(ns bar-2)
(defmethod foo :bar-2 [x y z] … another implementation)

The dispatch function returns a dispatch value and the methods are parameterised on dispatch value. Different method implementations can live in different namespaces and a call with the right arguments will always resolve to the right implementation, regardless of where it was defined.

Now Elixir has an equivalent to the multi: the protocol. We can define a Fooish protocol with a function foo:

defprotocol Fooish do
  def foo(x)
end

Now in any module, we can define an implementation for our Fooish protocol. But, and here’s the rub, an implementation looks like:

defmodule Foo1 do
  defimpl Fooish, for: String do
    def foo(s), do: …impl…
  end
end

So an Elixir protocol can only be parameterised on the type of its first argument! This means that there’s no way to dispatch protocol implementations based on lovely pattern matching. Disappointing.

It may even be that a smarter Elixir programmer than I could implement Clojure-style multimethods in Elixir. For now, I can find a work-around and I’m still digging Elixir a lot.
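To give a flavour of one possible work-around, here is a minimal sketch of my own devising (the Multi module and its register and call functions are invented for this post, not a library): a registry of implementations keyed by dispatch value, loosely mimicking defmulti/defmethod.

defmodule Multi do
  # Register an implementation under {name, dispatch_value}. Any module can
  # call this, loosely mimicking Clojure's defmethod.
  def register(name, dispatch_value, fun) do
    :persistent_term.put({__MODULE__, name, dispatch_value}, fun)
  end

  # Apply the dispatch function to the argument, then invoke whichever
  # implementation was registered for the resulting dispatch value.
  def call(name, dispatch_fn, arg) do
    fun = :persistent_term.get({__MODULE__, name, dispatch_fn.(arg)})
    fun.(arg)
  end
end

# From any module:
Multi.register(:foo, :bar_1, fn arg -> {:handled_by_bar_1, arg} end)
Multi.register(:foo, :bar_2, fn arg -> {:handled_by_bar_2, arg} end)

Multi.call(:foo, fn %{kind: kind} -> kind end, %{kind: :bar_1})
# => {:handled_by_bar_1, %{kind: :bar_1}}

It loses the compile-time niceness of real protocols, but it does let implementations live in different modules.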

Value metadata is a subtle but useful language feature

Since I am using Elixir to create AgendaScope, I’ve written quite a lot of Elixir code to get myself up to speed, most recently a parser combinator library, Ergo. And it’s in building Ergo that I’ve realised I really miss a feature from Clojure: metadata.

In short, Clojure allows you to attach arbitrary key-value metadata to any value. While there is syntactic sugar, it boils down to:

(meta obj) to get the map of metadata key-value pairs for the value obj.

(vary-meta obj f & args) to modify the metadata associated with the value obj.

If you make a copy of obj the metadata is carried with it. In all other respects obj is unaffected. Why is this so useful?

Well, I first used it to track whether a map representing a SQL row had come from the database or not, along with associated data about the record. I could have put that in the map using “special” keys but this pollutes the map with stuff not strictly related to its purpose and requires more complex code with special cases to handle those keys. But what if obj wasn’t a map to which I could add special keys?

Clojure itself makes heavy use of metadata for self-documentation, attribute tracking, and tagging of all different kinds of structures.

What about Ergo? Well, being a parser combinator library, Ergo is about building functions, usually returning them as anonymous functions. When debugging a parser you’d really like to know more about it & how it was constructed, but all you have is a function f.

In Clojure I’d just put all that in the metadata of f and it would be no problem later to get at it for debugging purposes by calling (meta f), the code of the parser itself needing to know nothing about the metadata at all. But in Elixir… I am not sure what to do.

Elixir functions can have multiple heads so it occurs to me that I could return my parsers like:

fn
  %Context{} = ctx -> ctx # do parsing stuff here
  :info -> %{} # return some metadata instead
end

And this could work but also feels brittle to me. If a user defines a parser and doesn’t add an :info head then my debugging code that invokes f.(:info) is going to generate an error. It doesn’t smell right. Is this the best I can do?
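The alternative is the map/struct conversion I complained about earlier. A minimal sketch, assuming a hypothetical Parser struct (none of these names come from Ergo):

defmodule Parser do
  # Wrap the anonymous function together with its metadata. Combinators
  # would build and return these instead of bare functions.
  defstruct [:fun, info: %{}]

  def call(%Parser{fun: fun}, ctx), do: fun.(ctx)
  def info(%Parser{info: info}), do: info
end

literal = %Parser{
  fun: fn ctx -> ctx end, # the real parsing logic would go here
  info: %{combinator: :literal, expects: "hello"}
}

Parser.info(literal)
# => %{combinator: :literal, expects: "hello"}

It works, but every combinator and every caller now has to know about the wrapper, which is exactly the chore I was hoping to avoid.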

I begin to wish Elixir had an equivalent of Clojure’s meta functionality.

Connecting Elixir GenServer and Phoenix LiveView

My new company AgendaScope is building its product using the Elixir/Phoenix/LiveView stack and while I have spent some time in recent months learning Elixir and Phoenix, LiveView was entirely new to me. This weekend I decided it was time to crack it.

I had a little project in mind: creating a dashboard of thermal info from my Mac. There’s a reason for that which is another post, but suffice to say I needed information displayed on a second computer, preferably my iPad.

The way I chose to tackle this problem was to create a GenServer instance that would monitor the output of a few commands on a periodic basis (at the moment once every 5 seconds). The commands are:

thermal levels

powermetrics --samplers smc | grep -i "die temperature"

uptime

The Elixir Port module makes this a pretty trivial thing to do with the GenServer handle_info callback receiving the command output that I parse and store in the GenServer state.
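A trimmed sketch of the shape of this (the skeleton below is my reconstruction for this post, not the actual ThermalsService code, and the parsing is elided):

defmodule ThermalsService do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    # Re-run the sampling command every 5 seconds
    :timer.send_interval(5_000, :sample)
    {:ok, %{cpu_pressure: nil}}
  end

  @impl true
  def handle_info(:sample, state) do
    # The Port sends the command's output back to this process as messages
    Port.open({:spawn, "thermal levels"}, [:binary, :exit_status])
    {:noreply, state}
  end

  def handle_info({_port, {:data, text_line}}, state) do
    # Parse the output and store the result in the GenServer state
    {:noreply, %{state | cpu_pressure: parse_cpu_pressure(text_line)}}
  end

  def handle_info({_port, {:exit_status, _status}}, state), do: {:noreply, state}

  defp parse_cpu_pressure(text_line), do: text_line # real parsing elided
end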

On the other end, it turns out LiveView is pretty simple. A LiveView template is an alternative to a regular static template (this part had not been clear to me before; I thought they sat side by side).

The LiveView module, in my case ThermalsLive, stores things like CPU pressure, GPU pressure, & CPU die temp in the socket assigns. And a template is simple to create. LiveView takes care of all the magic of making things work over a web socket.

For example, in a LiveView template the markup <a href='#' phx-click='guess' phx-value-number='1'>1</a> generates a link that results in the running LiveView module receiving the handle_event callback with the value 1. Like this:

def handle_event("guess", %{"number" => guess} = data, %{assigns: %{score: score, answer: answer}} = socket) do
 …
end

Something I learned about LiveView is that each browser session gets its own stateful server side LiveView module that lasts throughout the session. When a LiveView event handler changes the assigns in the socket structure LiveView responds by re-rendering the parts of the template that depended on those particular assigns. This is good magic!

Now the question was: How does my GenServer (ThermalsService) notify the LiveView (ThermalsLive) when one of the variables it’s monitoring (cpu_pressure) has changed? This event is not coming from the browser so the regular LiveView magic isn’t enough.

I couldn’t find a good simple example and so what I’ve learned was gleaned from some books and Google searching. The short answer is Phoenix.PubSub; here’s the longer answer.

First we need a topic. When three friends are at a table for lunch their topic might be “politics” and one of them might tune out and not be listening. But when the topic changes to “cheese” they tune back in and hear those messages.

For our GenServer & LiveView the topic (defined as a module attribute @thermals_topic) is going to be thermals_update, because I want them to talk about all the thermal readings on one topic rather than using separate topics for individual variables like cpu_pressure and gpu_die_temp.

On the LiveView end we need to subscribe to the same topic. The right place to do this appears to be in the mount callback where the socket assigns are first set.

def mount(_params, _session, socket) do
  if connected?(socket) do
    DashletWeb.Endpoint.subscribe(@thermals_topic)
  end
  …
end

We test if the socket is connected because it turns out a LiveView module has mount called twice. The first call is in response to the initial browser request, where it returns the HTML page that starts a web socket connection back to the server. The second call to mount is in response to setting up the web socket and the long-term communication between client and server. This seems to be the best point to subscribe to the topic.

It wasn’t obvious that the DashletWeb.Endpoint module had a subscribe function ready-made for this purpose and I was mucking about in the Phoenix.PubSub module for a bit before I hit on this.

My GenServer process ThermalsService receives handle_info callbacks from the Port that contain the terminal output of the running command as a string. With a little bit of parsing & conversion we end up with something like cpu_pressure: 56, and as well as storing this in the GenServer state we need the LiveView module to know about it. We need to broadcast a message to the thermals_update topic.

This turned out to be pretty simple:

alias Phoenix.PubSub
@thermals_topic "thermals_update"

def handle_info({_port, {:data, text_line}}, state) do
  …
  PubSub.broadcast(Dashlet.PubSub, @thermals_topic, {"cpu_pressure", cpu_pressure})
  …
end

I confess I am still not sure what Dashlet.PubSub is. By my Elixir knowledge it should be a module but I can’t find it defined anywhere; I suspect it is actually just a registered process name, given to Phoenix.PubSub when the application’s supervisor starts it. Anyway, it works.

The last piece of the puzzle, which I couldn’t find stated anywhere but inferred from examples, is that, for each broadcast, there is a call to the handle_info callback on all topic subscribers (in this case our LiveView module).

So, in my live view ThermalsLive I add:

def handle_info({"cpu_pressure", pressure}, socket) do
  {:noreply, assign(socket, cpu_pressure: pressure)}
end

This is super simple. It receives the cpu_pressure message from the GenServer along with the pressure value and stores it in the LiveView socket assigns (the same way the handle_event callback would do in response to clicking a browser link). This is enough for LiveView to take the hint and trigger a client update.

What foxed me was that regular LiveView events arrive via the handle_event callback that receives the LiveView socket instance. You need the socket to change its assigns to trigger an update. It didn’t occur to me that PubSub might ensure that the handle_info path would be equivalent.

And, there you have it, sending data from a GenServer via Phoenix PubSub to a LiveView module. I learned quite a bit via this exercise and I hope this might help anyone following the same path.

Codesign is not a heap of fun

My Apple Developer certificate had expired so I had to get a new one. Having forked out the Apple-Tax I duly had a new certificate but…

codesign went into a loop prompting me for my password.

mdwrite popped up asking for permission on my metadata keychain, which I didn’t know I had (it seems to be a file in ~/Library/Keychains).

And after all that nonsense the app wouldn’t run.

Message from debugger: Error 1

Not very helpful. I went off to find the crash report file (in ~/Library/DiagnosticReports) and that was more useful:

EXC_CRASH (Code Signature Invalid)

Hrmmm…

Digging deeper with codesign --verify --verbose ~/…/MacOS/Mentat got me back:

CSSMERR_TP_NOT_TRUSTED

Well that’s fine but what does it mean?

Checking my keychain, I found my Apple Developer certificate listed as “Not trusted”. But why?

Eventually, someone in the Core Intuition Slack gave me that answer. My Apple WWDR certificate, although not expired, was out of date. So I downloaded and installed the latest certificate.

I nuked the DerivedData folder and built from scratch and I am back in business.

I’m glossing over the time I spent trying to figure this out, ask the wrong (and later righter) questions, and stare in despair at this problem not of my own making.

I’m grateful for the help or this would be baffling me yet.

Applications as Platforms

I had an interesting exchange with David Buchan over the weekend regarding Mentat.

I’ve done a very bad job articulating, even to myself, what Mentat is supposed to be. This monograph attempts to lay out the foundations of that.

Applications have become a very common part of our life. I spend a big chunk of every day in my web browser or email applications as I guess you do. Unless you are a developer you probably don’t think too much about them except for the odd gripe or chafing when they don’t work the way you would like.

Every application is a function of a series of choices, made by the developers, about how they should work. Those choices are themselves a function of a series of beliefs about what their application should be.

To the extent that your beliefs about what any given application is for, and your preferences about how to do the work, mirror those of the developers, you are likely to be happier with your experience.

This leads to a situation in which some applications are arbitrarily limited by their developers. The developers simplify what is possible into a set of (they hope) good choices that will suit the most users. For those users, the lack of available choices can be a blessing as the application feels like it fits them.

However, if you’ve had the experience of working in such an application when it doesn’t match your preferences — it can feel painful, cramped, unintuitive. It is, I’ve always suspected, how left-handed people feel when given right-handed scissors.

At the other extreme are applications that can be extensively customised. That is to say that if you don’t like the default behaviour of the application, that behaviour can be changed to better suit your preferences.

The downside of this approach is complexity. Every choice must be exposed as an option, and that can lead to a mess of configuration settings that are hard to understand or predict. We can see the difference if we consider the configurability of two popular text editors (not counting those options relating to fonts, style, or programming language syntax):

TextEdit — 26 preference choices

Sublime Text — 113 preferences

Sublime Text can be extensively customised but at the cost of a much steeper learning curve compared to TextEdit since you have to understand what those options do and how they interact with each other and with the tasks you are trying to perform.

If you are spending a lot of time in your editor (as developers, a prime target for Sublime Text, often do) then it can be worth it as you may, ultimately, become far more productive at editing tasks than someone who sticks with TextEdit.

While there is a huge difference in the kind of text editor they are, both TextEdit and Sublime Text are definitely text editors. You wouldn’t suddenly find yourself reading your email or doing your taxes in either of them.

Doing your taxes is maybe a stretch, but reading & writing emails? Why wouldn’t you use your favourite editor for handling what are, for the most part, text editing tasks?

Well, probably because the developers decided that being an email client was not an important part of being a text editor. This is about the choice of what an application should be. Its purpose.

Emacs is another text editor that, like Sublime Text, offers a huge amount of possibilities to customise its behaviour. But there is something else about Emacs that sets it apart even from editors like Sublime Text: Emacs is also a platform.

In the computing world, a platform is something that you can build applications upon. For example, macOS is a platform. It is an environment (a kind of application we experience as a window) that offers a rich set of APIs on which other applications, like TextEdit, can be constructed. But TextEdit is not a platform. You can’t build anything new on top of TextEdit. It will always be a text editor.

So when I say that Emacs is a platform what I mean is that it provides the infrastructure on which other things can be built. In this example, Emacs is really two things sharing one name: Emacs the platform & Emacs the text editing application that runs on Emacs the platform.

The macOS platform provides the Objective-C (and Swift) languages and the Cocoa API to create applications that run within it. The Emacs platform provides the Emacs Lisp language and APIs for displaying & managing text to create applications that run within it.

Much of the behaviour of Emacs the editor is written in Emacs Lisp. If you don’t like how the editor works it is possible to change the source code and alter its behaviour at a fundamental level. Or if you want to do something completely different you can write new source code to create new functionality and new applications that live within it. Indeed people have written email clients, outliners, and even web browsers inside Emacs!

Another example, that is less well known but closer to my heart, is Radio Userland written by Dave Winer. Radio was an application for reading RSS feeds and publishing a blog. But, under the hood, both of these were implemented atop the Frontier platform with its Usertalk language.

This meant that Radio could be extensively customised, re-written, and extended to handle new needs. And lots of us did that. In my case, we built a tagging framework on top of Radio that allowed us to create an online service for the semantic aggregation of blogs.

All of which brings me back to Mentat (If you got this far I guess you are wondering where I am going with this).

There are a lot of applications for managing structured and semi-structured knowledge of different kinds. But, like TextEdit, they are applications. Their purpose and their behaviour, while customisable to a greater or lesser degree, are fixed.

For example we could see a ‘To Do’ list as a form of semi-structured information. What I mean by ‘semi-structured’ is that there is an overall shape to a ‘To Do’ list but within that there may be a lot of flexibility about an item in the list.

It seems like every week someone launches a new To Do list management app which has a slightly different twist on the purpose and bundles a different set of features to fit a different kind of audience. Some are simple like the iOS Reminders app, good for remembering what to buy at the supermarket, some are complex like Asana, good for coordinating large projects involving dozens or hundreds of people.

Email too is semi-structured. You have an inbox that contains a list of messages and each message has metadata fields (e.g. ‘From’, ‘Subject’, ‘Sent’) and a body that could contain anything. There are dozens and dozens of different email clients again ranging from the simple to the complex.

Semi-structuredness is not, I think, binary but a spectrum. A novel or a poem could be considered unstructured as it is a flowing series of words and punctuation, and authors often play with the format. On the other hand, a novel could be considered structured in that its words sit under chapter headings and those chapters create a structure. File formats like SGML & XML are attempts to map structure onto essentially unstructured text.

In my own realm, I have discovered an explosion of semi-structured information that I try to grapple with and make use of. Things like facts (though I might not formally label them as such), notes, to do’s, links, bookmarks, books, and especially questions. I do like a good question.

Over the years I have been through many different applications that attempt to help me wrangle such things. They have been successful or unsuccessful to some degree as they have fit my needs or not. But none has ever truly stuck.

Right now I am making a lot of use of Roam Research, which describes itself as ‘A note-taking tool for networked thought.’ In practice Roam is a browser-based outliner, editor, and cross-referencing tool. It is very good at what it does and, indeed, it caused a big problem for Mentat because it did some of the core things I wanted so well that I signed up for 3 years and more or less abandoned Mentat.

However I can already see the cracks. The developers of Roam have a vision in mind and… it’s not my vision. They have a set of priorities for what they work on and… it is not my set of priorities. Decisions made in how Roam is implemented have serious consequences for how it can be used.

As much as I use Roam now and think it’s a great app, I can’t see myself using it in 5 years’ time and certainly not in 10. But do I think that semi-structured information will still be important to me then?

There’s one other thing about Emacs that I think is worth mentioning. It was first released in 1976 and has been in constant development to this day. In 1985 GNU Emacs was released, including the Emacs Lisp language and making Emacs truly a platform. While there have been many changes, a user of Emacs today would recognise the same application from 30+ years ago. If you learned Emacs in 1986 you could still be using it today.

My conjecture is that this is because Emacs is a platform that exposes much of its functionality via Emacs Lisp. This means that the applications built on the platform can be customised to fit the needs of the user of the platform and that they can evolve as those needs shift.

Evolution is perhaps the missing aspect of applications vs. platforms. When something is delivered as a platform it allows the user a great latitude to customise applications to fit their needs and to evolve those applications as their needs change.

What I have not found to date is a credible platform for working with semi-structured information (both individually and in a group) on which I can build the kinds of application that I want to use, working the way I want to use them, and evolving with my own agenda (which has also changed drastically over the years).

I want a platform that, like Emacs, could still be relevant and useful thirty years from now!

Mentat, then, is an attempt to create such a platform: one that can store, manipulate, receive/transmit, and visualise semi-structured information, and an environment for building applications that depend upon the information stored.

Stop aliasing String.t and integer

I’m learning Elixir and, a little unwillingly, learning about its system of type specifications. I will concede that being able to specify the types of return values is helpful but I am not sure about the rest. Nevertheless, I am told the Dialyzer system (which analyses type information and looks for errors) can be very helpful, so for now I am playing along.

While reading the documentation something caught my eye:

Defining custom types can help communicate the intention of your code and increase its readability.

defmodule Person do
  @typedoc """
  A 4 digit year, e.g. 1984
  """
  @type year :: integer

  @spec current_age(year) :: integer
  def current_age(year_of_birth), do: # implementation
end

I found myself puzzled by this statement. The argument in question, year_of_birth, already clearly indicates it’s a year. And the type cannot enforce the rule “A 4 digit year, e.g. 1984”.

So what is the type spec adding? It seems to me that what it’s adding is a step of cognitive overhead in understanding that what is being passed is an integer.
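For comparison, the direct spec seems (to me) to communicate just as much without the indirection:

@spec current_age(integer) :: integer
def current_age(year_of_birth), do: # implementation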

I’ve seen other examples of creating types for username and password that alias the String.t type, and again I find this unconvincing since the function being spec’d almost certainly calls its arguments username and password, so what is being added?

Where a type adds useful information I buy it. A struct, for example, is already a kind of alias since it defines compound structure. But for primitive types like String.t and integer it seems like adding aliases is hiding information, not adding to it.

My problem with browsers

In this post I am going to outline the challenge that I face in using a browser to do my work. In a future post I will draw together some ideas about what I can do about it.

I am one of those people who routinely has 200 or more open tabs in my browser, spread across several windows. I realise I am a degenerate case but there we are, I don’t seem likely to change.

But browsers like Chrome, Safari, and Brave, even though they are mostly capable of handling this, aren’t designed for you to work with hundreds of tabs. You end up with things like this. Not, I would argue, very useful.

Tabs don’t work past a certain limit.

Then again, bookmarks, either in the browser or using services like pinboard.in, are not a very effective approach to dealing with the problem. I can use them to “hide” tabs… but I find that I either forget about them entirely and lose track of what I was looking into, or I feel uncomfortable because something is hidden from me. History isn’t a great tool here either; it’s like bookmarks but worse, especially since Chrome only holds on to history for 3 months!

The problem is one of doing research, across a range of threads, and multiple open loops.

For example, I am:

  • Researching business models for a guide I am writing for work (2 weeks+)
  • Exploring a range of topics related to the Elixir programming language, Ecto database layer, and Phoenix web stack (3 months+)
  • Carrying on a long-term research project about business strategy (2 years+)
  • Keeping up on current news (day-to-day)
  • Looking into how to do customer success in a SaaS context (4 weeks+)
  • Researching and building parser combinators (2 months+)
  • Sketching the outline and business model of a new application (5 months+)
  • Working on a pitch deck (3 months+)
  • Looking at some Minecraft resources (12 months+)
  • Exploring pricing & pricing support services (6 months+)
  • Learning Postgresql (2 weeks+)
  • Exploring how teams think about strategy together (6 months+)

I mean there are more threads than this, but I’d have to spelunk a lot more tabs to cover it fully. Some of my tabs will be over a year old (about when Chrome last lost everything). A tab, in this context, represents the current, dangling end of a train of thought.

Some of these trains of thought have been going on for years. Browser sessions are pretty ephemeral and don’t respect my approach very well. Sometimes the browser dies and takes a hundred or more tabs with it. Sometimes I get them back, sometimes I don’t. Sometimes I end up with a duplicate window and have to weed.

You could say “don’t do this” and maybe you’d be right for you but this is how I am. I’m not going to fight it. That means I need to find a way to adapt the tools to work better for me.

See also: Browser tabs are probably the wrong metaphor

Inserting associated models with Ecto

I’m building a “learning app” using the PETAL stack and it’s taxing some of the grey cells that haven’t worked since I was working with Ruby on Rails many years ago.

It’s hit a point where I need to insert two associated records at the same time. I am sure this involves Ecto.Multi, but I’m also trying to understand how to build the form since form_for expects an Ecto.Changeset and, so far as I can see, these are schema-specific. Or, at least, they have been in all the examples I’ve seen so far.

I make a lot of introductions by email so I decided to build a little helper application to make it easier for me. My schema at the moment is quite simple:

Contact <-> Role <-> Company

At the moment I have a very simple CRUD approach where you create Contacts and Companies separately. But, of course, in practice when I create a Contact I want to create the Company and specify the Role at the same time. And that’s where I run into a problem. The common pattern is something like:

def new(conn, _params) do
  changeset = Introductions.change_contact(%Contact{})
  render(conn, "new.html", changeset: changeset)
end

In this case we are creating an Ecto.Changeset corresponding to a new Contact. Later when we want to build a form to edit the details we have:

<%= form_for @changeset, @action, fn f -> %>

Where the form relates the fields of the schema to the fields in the form.

So the question is how to create a “blended” or “nested” changeset that can contain the details of each of the three schemas at work.

I’ve not seen any examples covering this case. I’m muddling my way through it but it would be great to have something to work from.
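For anyone in the same spot, the most promising shape I’ve found so far is Ecto.Changeset.cast_assoc/3, which lets one changeset carry nested changesets for its associations. A sketch, with field names guessed rather than taken from my actual schemas (the Company schema is analogous and elided):

defmodule Introductions.Contact do
  use Ecto.Schema
  import Ecto.Changeset

  schema "contacts" do
    field :name, :string
    has_many :roles, Introductions.Role
  end

  def changeset(contact, attrs) do
    contact
    |> cast(attrs, [:name])
    # Builds nested Role changesets from params under the "roles" key
    |> cast_assoc(:roles, with: &Introductions.Role.changeset/2)
  end
end

defmodule Introductions.Role do
  use Ecto.Schema
  import Ecto.Changeset

  schema "roles" do
    field :title, :string
    belongs_to :contact, Introductions.Contact
    belongs_to :company, Introductions.Company
  end

  def changeset(role, attrs) do
    role
    |> cast(attrs, [:title])
    # The Role changeset nests a Company changeset in turn
    |> cast_assoc(:company)
  end
end

On the form side, inputs_for inside the form_for block should render the fields of the nested changesets, though I’ve yet to prove this out end to end.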

Always Future Agents

I’ve been interested in software agents since I came across Graham Glass’ software ‘ObjectSpace Voyager’ in 1998. The idea behind agents is software that can act on its own on behalf of its “owner”, much like a human agent in the sports or entertainment field.

If you’ve ever used something like Spotlight, you’ve used a local agent. Spotlight works away in the background indexing the files on your computer so that it can answer questions like “Where did I put that presentation where I mentioned ‘Bitcoin futures’?”

There are quite a few “local agents” that are useful. But what if it’s someone else’s presentation that you are looking for? What if it’s on their laptop? To be truly useful to their owners, agents must be capable of being distributed.

In 1998 Object-Oriented was all the rage, but distributed software was still a mess. If there was a lot of money riding on it, you could use CORBA. I was significantly techy back then, and even I had trouble with CORBA. Java had Remote Procedure Calls (RPC) by which objects could message other objects, but the whole edifice of distributed computing was fragile. There was no platform on which you could write distributed communicating agents.

Then along came Voyager. At a stroke, Voyager let you turn a Java object into an agent that could communicate with other agents wherever they were. More amazing still was that an agent running on Voyager on your machine could “hop” to another device and execute there. It took my breath away.

Sadly, it was also useless. Almost nobody else had heard of Voyager or seemed to see its potential. There was nowhere for your agents to go and nothing much for them to do if they got there. I could never see how to make real use of it. I think this reality started to bite because Voyager pivoted and became a good, boring web application server.

But for a brief moment, I saw a beautiful future of agents communicating with each other to help their users solve problems (yes, Tron made a big impression on me as a child)!

Though it faded over the years, I’ve never entirely lost that vision. It sits as an, as yet, unexplored part of Mentat. In Mentat, scripts are a first-class citizen, and I want to make it easy to create agent scripts that perform functions on my behalf. The distribution will be achieved using TupleSpaces (an overlooked concept in distributed computing).

A simple but powerful use-case could be finding answers to questions. Imagine something like this:

  • You pose a question and post it.
  • One of your agents sends the question metadata to one or more shared tuple spaces.
  • My agents are waiting for tuples matching things I am interested in.
  • One of my agents spots your question and, realising it (a) matches my interests and (b) meets my priority requirements, ‘takes’ it.
  • It presents your question to me along with the related resources that I have on hand.
  • I select from among those resources to compose my answer.
  • My agent posts my answer back into the tuple space.
  • Your agent spots an answer and collects it to present it, and potentially others, to you at an appropriate moment.
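To make the exchange above concrete, here is a toy sketch of a tuple space in Elixir (all the names are mine, and a real implementation would need blocking takes, richer pattern matching, and distribution):

defmodule TupleSpace do
  use Agent

  def start_link(_opts), do: Agent.start_link(fn -> [] end, name: __MODULE__)

  # Linda's 'out': place a tuple into the space
  def put(tuple), do: Agent.update(__MODULE__, &[tuple | &1])

  # Linda's 'in': remove and return the first tuple the matcher accepts
  def take(matcher) do
    Agent.get_and_update(__MODULE__, fn tuples ->
      case Enum.split_with(tuples, matcher) do
        {[match | other_matches], rest} -> {match, other_matches ++ rest}
        {[], rest} -> {nil, rest}
      end
    end)
  end
end

# Your agent posts a question; my agent takes anything matching my interests.
TupleSpace.start_link([])
TupleSpace.put({:question, :elixir, "How do protocols dispatch?"})

TupleSpace.take(fn
  {:question, :elixir, _text} -> true
  _other -> false
end)
# => {:question, :elixir, "How do protocols dispatch?"}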

Sounds a bit like posting a question to a web forum, right? Yes, but the differences have the potential to be transformative.

  • You don’t have to decide where to put your question; your agent can do that. Depending on your preferences, it might put it in many spaces and with different metadata depending on the space.
  • I don’t have to look for your question; my agent decides if it’s something I will want to respond to. Or ignore. Or maybe just file it away for some other purpose.
  • My agent can assemble resources on my behalf to make it easier to answer that question.
  • You don’t have to look for replies; your agent will assemble them. Potentially using a quality filter (oh, Matt replied, you’ll want/not want to see that) and potentially digesting answers. Your agent might just as well say, “You’re busy right now, but I suspect you’ll want to see Matt’s question; I will present it at another time”.

Since our agents are software under our control, we can determine how they work and improve them over time to better use our knowledge and attention.

For example, your agent might not be tasked with trying to answer my questions but simply to reflect, “Matt seems to be interested in topic X right now”. Indeed your agent might notify you not about my questions but with analysis of my questions. This could go in all kinds of directions.

I have many of the pieces of this infrastructure in place but can’t make progress right now. I really wish I could find someone to collaborate with on this platform. Hopefully, I still have a few years to get back to it and turn it into something real that people can use to solve problems.

Sharing our work

When I started working on Mentat back in 2018 I had in mind a kind of “total information store” that I could use for all sorts of things, but often oriented around outputs: answers to questions or pieces of content.

This was a reflection of the blurring of my work & life and the way that information tends to disappear into other people’s silos over time. I am interested in what I am interested in, no matter the context, and I would like to know what I know, or at least what I thought at some point.

Meanwhile, Roam Research has come along and hoovered up a lot of use cases. I use Roam as a habitual note-taking environment, a light-weight TODO system, calendar, and personal CRM. I use it for drafting LinkedIn posts, blogs, and newsletters. And while I’m a relatively unsophisticated Roam user (for example I’ve never written a query in anger, don’t use Roam/js plugins, and still use the default theme), it has certainly come to dominate my digital life.

At the same time, I can reflect that one of Roam’s great strengths, its focus on blocks of written text (with tags and backlinks), is also its Achilles heel. You can put anything in Roam but structure appears only sporadically and with effort. How can you act upon what Roam knows?

In the context of, say, writing an article it works well. But what if I wanted to see if I could answer a specific question relating things that I know? That could be a lot more tricky.

Mentat comes from the opposite perspective. It deals with structured ‘things’ (indeed the root of the Mentat mental taxonomy is something called Thing). We can have a Thing called Note that represents free text. This is never going to be as powerful as Roam but, at the same time, we know what is a Note and what is instead a Person. Roam can approach this through the use of tagging. I routinely add Tags: #Person to people I add to Roam as part of my CRM, but it’s not the same thing.

As yet, Roam provides few tools to act on this and of course it relies upon my consistently tagging people — which often I forget — and applying the same schema over time (mine has changed 3 times as the advice has changed). Again there are solutions to these problems but they are always a compromise of being based upon free text. Mentat has its own compromises but, similarly, its own strengths.

Three things I see as being very important to using a future version of Mentat for work are:

  • Being able to structure questions with appropriate metadata that allows them to be shared and acted upon by others.
  • A “shared space” in which questions can be placed and taken.
  • The ability to create agents that can act on things in the shared space: taking them, acting upon them, changing things locally, and potentially placing things back into the shared space.

Roam is going to have to tackle the problem of people sharing their graphs. That is going to be a hard problem. Mentat will allow people to create shared spaces and exchange information without needing to create a total mapping.

It will be interesting to see (a) if I can build this, and (b) if it might work as well as, or better than (for some problems), what Roam comes up with.