Summary of GPT-4 Technical Report

I fed the 100-page GPT-4 Technical Report to the Kagi Universal Summarizer and this is what it came up with:

The PDF document discusses the development, capabilities, limitations, and safety challenges of the GPT-4 language model. The main goal of developing such models is to improve their ability to understand and generate natural language text, particularly in complex and nuanced scenarios. GPT-4’s capabilities and limitations create significant safety challenges, and the document emphasizes the importance of studying these challenges due to their potential societal impact.

The document compares GPT-4 with the best state-of-the-art (SOTA) models and highlights the model’s safety improvements. The approach to safety consists of two main components: safety-relevant RLHF training prompts and rule-based reward models (RBRMs). These mitigations have significantly improved many of GPT-4’s safety properties. However, it is essential to complement these improvements with deployment-time safety techniques like monitoring for abuse and a pipeline for fast iterative model improvement.

The document also discusses the safety challenges presented by the model’s limitations (e.g., producing convincing but subtly false text) and capabilities (e.g., increased adeptness at providing illicit advice, performance in dual-use capabilities, and risky emergent behaviors). To understand these risks, over 50 experts were engaged to gain a more robust understanding of the GPT-4 model and potential deployment risks.

The document acknowledges the need for anticipatory planning and governance, as well as the importance of focusing on safety challenges to motivate further work in safety measurement, mitigation, and assurance. It also highlights the need for developers to adopt layers of mitigations throughout the model system, ensure safety assessments cover emergent risks, and engage in collaborative safety research.

In conclusion, the document emphasizes the importance of understanding and addressing the safety challenges posed by the GPT-4 language model, as both its improved capabilities and limitations can have significant implications for the responsible and safe societal adoption of these models.

Key Points:

– GPT-4 is designed to improve the understanding and generation of natural language text, especially in complex and nuanced scenarios.

– The development of GPT-4 presents significant safety challenges due to its capabilities and limitations, necessitating careful study and research to mitigate potential societal impact.

– OpenAI is committed to independent auditing of its technologies and plans to share technical details with third parties to ensure transparency and safety.

– GPT-4’s safety approach consists of two main components: safety-relevant RLHF training prompts and rule-based reward models (RBRMs).

– Mitigations have significantly improved many of GPT-4’s safety properties, but deployment-time safety techniques like monitoring for abuse and fast iterative model improvement are still necessary.

– Microsoft has been a key partner in the development of GPT-4, providing support for model training, infrastructure design, and management, as well as partnering on safe deployment.

– GPT-4’s reward model (RM) training involves collecting comparison data where labelers rank model outputs, and the reward is used to update the policy using the PPO algorithm.

– GPT-4 presents safety challenges due to its limitations (e.g., generating subtly false text) and capabilities (e.g., providing illicit advice, dual-use capabilities, and risky emergent behaviors).

– Anticipatory planning and governance are crucial to address the risks associated with GPT-4, and further work in safety measurement, mitigation, and assurance is needed.

– Developers using GPT-4 should adopt layers of mitigations throughout the model system, ensure safety assessments cover emergent risks, and provide end users with detailed documentation on the system’s capabilities and limitations.

Roll ’em

I needed images representing dice rolls for a side project I have been working on. While I thought I would find something reasonable, all the images I could turn up either didn’t look very good, or had a license I didn’t want to work with, or I couldn’t figure out the license. That left me in a bit of a pickle because I have no graphical talent at all.

It occurred to me to check whether ChatGPT knew how to make SVG graphics, and it turned out that it did! It took me quite a while to get the prompts right; there was a lot of back and forth where I would fix one problem and ChatGPT would revert some other change. This experience probably deserves its own post, but suffice it to say that while I got what I needed, I might have been better off using Fiverr.

Here is the result:

They’re rendered at 400×400, so they look decently sharp when scaled down to smaller sizes. I’m pretty happy with them. I’ve rendered out a set for each of the CSS colours. Download the whole set here, released under a Creative Commons license.

Is a better Elixir REPL possible?

As an Elixir developer, one of the things I miss about Clojure is its REPL and the ability to eval code directly from the editor into the REPL session. For example, when working on a function, I can send the current code to the REPL to redefine the function and then play with it in the REPL. Elixir has a REPL, but you can’t do this.

Clojure has the concept of a “current namespace” (a namespace is the nearest Clojure equivalent to Elixir’s module). Anything evaluated in the REPL is done in the context of this namespace. You can switch namespaces using the in-ns function. Elixir isn’t this loose: to re-evaluate a function you need to specify its module, as there’s no implicit module context.

def foo(), do: true
> ** (ArgumentError) cannot invoke def/2 outside module

defmodule X do
  def foo(), do: true
end
> {:module, X, ...}

I wonder if it would be possible to implement the concept of a “current module” and then automatically wrap any use of def with defmodule <current_module> do; …code…; end?
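As a first experiment, the wrapping itself seems easy enough to do with Code.eval_string. Here is a minimal sketch of the idea (the module name and snippet are illustrative, and a real implementation would need to deal with the module-redefinition warnings):

```elixir
# Sketch: simulate a "current module" by wrapping evaluated code in a
# defmodule before compiling it. "Scratch" is an illustrative module name.
current_module = "Scratch"
code = ~s|def foo(), do: true|

wrapped = """
defmodule #{current_module} do
  #{code}
end
"""

# Compiles (or recompiles) Scratch, defining/redefining foo/0
Code.eval_string(wrapped)

Scratch.foo()
# => true
```

Each subsequent eval would recompile the module, which is roughly what redefining a function in a Clojure namespace feels like.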

Wrapping your Elixir app in a Burrito

Probably not an original title but here I am talking about turning an Elixir command-line tool I built into a native executable.

When I needed to learn Elixir (which we’re using in AgendaScope) I picked a project I’d been thinking about for some time: a tool for building interactive fiction. I liked the problem particularly because it would cover a lot of ground while being quite self-contained.

It was a great learning experience and, along the way, I built a couple of tools I’m quite proud of: a parser combinator library, Ergo, and a library for representing multiple files as a single contiguous file, LogicalFile.

So now I have this tool that is actually pretty close to being usable, and I start thinking about how other people might test it out. That’s where my choice of Elixir starts to look not so good.

You see, Elixir has the notion of an escript: a compiled script that embeds Elixir so a user doesn’t need Elixir installed. But an escript still depends upon Erlang and needs the Erlang runtime system (ERTS) installed. For the kind of people I am targeting that felt awkward.

Along comes Burrito, which creates a native binary for macOS, Linux, and Windows. The binary automatically installs ERTS for the platform and hides all that stuff in the background.

Some things that I struggled with:

  1. I didn’t spend enough time looking at the example app. Don’t be like me.
  2. When you create the releases() function in mix.exs you need to actually call it from your project() function as releases: releases(); this does not magically happen. Without it, mix release will still run, but Burrito won’t do anything. If you’re unfamiliar with mix release, as I was, you might see what it does as “working” and be quite confused.
  3. In your releases() function the key (which is example_cli_app in their example) should be the app key from your project() function.
  4. If you use MIX_ENV=prod mix release then beware that the binary will not include updated code no matter how much you mix compile unless you also bump the project version.
  5. On macOS the cached deployment is in ~/Library/Application Support/.tinfoil which makes no real sense to me and was pretty hard to find. If you don’t fall foul of (4) above this may not matter to you.
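For reference, here’s roughly what the wiring in mix.exs looks like once it’s all hooked up. This is a sketch based on my reading of the Burrito example app, with illustrative names (my_cli_app); check the Burrito README for the current options:

```elixir
def project do
  [
    app: :my_cli_app,
    version: "0.1.0",
    elixir: "~> 1.12",
    releases: releases()   # easy to forget; without this Burrito does nothing
  ]
end

def releases do
  [
    # this key must match the :app key in project/0
    my_cli_app: [
      steps: [:assemble, &Burrito.wrap/1],
      burrito: [
        targets: [
          macos: [os: :darwin, cpu: :x86_64],
          linux: [os: :linux, cpu: :x86_64],
          windows: [os: :windows, cpu: :x86_64]
        ]
      ]
    ]
  ]
end
```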

All in all, most of my issues came down to a lack of observation on my part. The tool works (although I confess I haven’t been able to test the cross-platform binaries) and is a boon to anyone building command-line tools who wants to keep using their favourite language.

Without Burrito I’d probably be learning Zig or Nim right now.

Many’s the difficult parser

A while back I wrote a parser combinator library in the Elixir language called Ergo. I did it partly as a language learning exercise and partly because I had some things needing parsing that related to AgendaScope (which is also being written in Elixir).

Barring some performance issues, Ergo has actually turned into quite a nice tool for parsing and it’s getting a workout in two projects: one related to AgendaScope and one a fun side-project. But there is a problem I’ve hit that I haven’t quite thought my way around yet. The side project is easier to talk about so let’s use a problem example from there:

@game begin
  title: "The Maltese Owl"
  author: "Matt Mower"

  @place bobs_office begin
    description: """A dimly lit, smoky, office with a battered leather top desk and swivel chair."""
    w: #outer_office
    s: #private_room

  @actor bob_heart begin
    name: "Bob"
    role: #detective


A parser combinator combines simple parsers together to create more complex parsers. A simple parser might be character, which parses single characters like ‘a’, ‘b’, ‘5’, ‘#’ and so on. Then there are parsers (called combinator parsers) that take other parsers and use a rule to apply them. It’s a bit like higher-order functions.

The sequence parser applies a series of parsers, in turn, and returns :ok if they all match, otherwise :error. The choice parser takes a list of parsers and returns :ok when one of them matches, otherwise :error. The many parser takes a single parser and applies it repeatedly. When a parser returns :error the input is rewound so that the next parser gets a chance to match on the input. So in a choice each parser gets an opportunity to parse the same input.
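To make that concrete, here is a minimal sketch of those ideas (this is not Ergo’s actual API, and char here is ASCII-only for brevity): a parser is just a function from input to {:ok, value, rest} or {:error, reason}, and choice “rewinds” simply by handing each alternative the same original input.

```elixir
defmodule Sketch do
  # A parser for a single (ASCII) character
  def char(c) do
    fn
      <<^c, rest::binary>> -> {:ok, <<c>>, rest}
      _input -> {:error, {:expected, <<c>>}}
    end
  end

  # Apply each parser in turn; all must match
  def sequence(parsers) do
    fn input -> do_sequence(parsers, input, []) end
  end

  defp do_sequence([], input, acc), do: {:ok, Enum.reverse(acc), input}

  defp do_sequence([p | rest], input, acc) do
    case p.(input) do
      {:ok, value, remaining} -> do_sequence(rest, remaining, [value | acc])
      {:error, _} = err -> err
    end
  end

  # Try each parser against the SAME input until one matches
  def choice(parsers) do
    fn input -> do_choice(parsers, input) end
  end

  defp do_choice([], _input), do: {:error, :no_alternative}

  defp do_choice([p | rest], input) do
    case p.(input) do
      {:ok, _, _} = ok -> ok
      {:error, _} -> do_choice(rest, input) # rewind: reuse the original input
    end
  end
end
```

So choice([char(?a), char(?b)]) applied to "banana" fails on the first alternative, retries with the untouched input, and succeeds with {:ok, "b", "anana"}.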

Let’s look at an example to parse the above syntax (ignoring whitespace & other issues to keep things clear). It might look something like:

# identifier(), attribute_name(), and attribute_value() are assumed helper parsers

def game() do
  sequence([literal("@game"), literal("begin"), many(choice([place(), actor()])), literal("end")])
end

def place() do
  sequence([literal("@place"), identifier(), literal("begin"), attributes(), literal("end")])
end

# actor() will look very similar to place()

def attributes() do
  many(attribute())
end

def attribute() do
  sequence([attribute_name(), literal(":"), attribute_value()])
end

Within game the choice parser attempts to parse a place and if that fails, rewinds the input, and attempts to match actor instead. The many parser would then repeat this choice over and over so that you could parse an infinite stream of places and actors, in any order.

The many parser can succeed zero or more times. It’s like the * operator in a regular expression. When its child parser returns :error (in our example, when the choice cannot match either a place or actor) it stops. But what does it return?

It turns out that many returns :ok and not :error. That’s because many is greedy and will keep trying to match forever so that not matching at some point is not only expected but required!

If we were parsing:

@game begin
  @place …

  @actor …

  @finally …

We expect many to terminate because @finally is not part of the sequence of places & actors. However, if we look at the example at the beginning, we have two attributes defined on bobs_office called w and s.

It turns out that an attribute name must be at least 3 characters long so these are invalid and attributes is consequently returning an :error and triggering the end of the many. But this is a different end than meeting @finally.

In this case the input is rewound and the next parser — literal("end") — gets applied. But it’s getting applied to input that should have been consumed by the place parser!

So the problem, as clearly as I can state it, is how to distinguish between situations in which many receiving an :error from its child parser means “end of the sequence” and when it means “I should have parsed this, but couldn’t because of a user error.”

I’m not sure what an elegant solution is to this problem. My first instinct is to add a “parser” called something like commit that specifies that an error after this point is a user-error.

So, for example:

def place() do
  sequence([literal("@place"), commit(), identifier(), literal("begin"), attributes(), literal("end")])
end

What this means is that if we have correctly parsed the input “@place” we know we are definitely in something that should match. If we can’t get place to match from here then the user has specified incorrect input and rather than place returning :error it should return something like :fatal as an indicator to future parsers that the input cannot be correctly parsed, rather than attempting to continue parsing on the wrong input.
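In many, the handling might then look something like this sketch (commit, :fatal, and the tuple shapes are my assumed names, not Ergo’s API): an ordinary :error ends the repetition as before, while a :fatal from a committed branch aborts the whole parse instead of rewinding.

```elixir
defmodule FatalSketch do
  # many: apply parser repeatedly; :error ends the repetition normally,
  # :fatal (from a committed branch) aborts the whole parse.
  def many(parser) do
    fn input -> repeat(parser, input, []) end
  end

  defp repeat(parser, input, acc) do
    case parser.(input) do
      {:ok, value, rest} -> repeat(parser, rest, [value | acc])
      {:error, _reason} -> {:ok, Enum.reverse(acc), input} # end of sequence: rewind
      {:fatal, _reason} = fatal -> fatal                   # user error: propagate
    end
  end
end
```

With this shape, the enclosing parsers only need one extra clause each to pass a :fatal straight through rather than treating it as a normal failure.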

Is the problem clear? Would this work? Is there something I’ve missed? Perhaps a more elegant solution to this problem?

Text of my letter to Councillor Alvin Finch, Priestwood and Garth, Bracknell

Dear Alvin Finch,

I have just written to James Sunderland MP about the Conservative government’s, your government’s, Health & Social Care bill and what it will do to the NHS. The text of my letter is here:

As a Conservative councillor I wonder if you are happy with your MP supporting the destruction of the NHS in all but name & clap, and ushering in an era of US style private medical services & insurance?

You may think constituents “don’t mind NHS privatisation” but they haven’t seen the bills yet. Or what it would cost them to insure themselves and their families.

You may think it’s right for private companies to have a role in the NHS, I don’t feel strongly either way, but I do feel strongly that if you remove the duty of the government to provide hospital services that can only be interpreted one way.

Do all you councillors, your friends & family, have amazing private medical insurance? If not, you might want to worry about what this will do to people you care about.

Perhaps you could have a word with Mr Sunderland and your party colleagues about this and about the possible blowback of enabling profit & corruption off the backs of the sick?

Yours sincerely,

Matthew Mower

Text of my letter to James Sunderland MP about the Health & Social Care Bill

Dear James Sunderland,

I am writing in regard to the government’s Health & Social Care bill and in particular some key concerns that have been raised:

1) Removal of statutory duty to arrange provision of secondary (e.g. hospital) medical services.

The only possible reason to want to remove a duty to arrange provision is because you don’t want to arrange provision, i.e. you want to reduce the provision of free healthcare and allow private medical companies & private insurance companies to grow. The Conservative government want to turn people being sick into a profit centre for their mates in private medical. How do we know this?

2) Removal of the obligation for public tendering for NHS services allowing ministers to circumvent normal procurement rules.

So, the VIP lane written into law. Mates of the Health Secretary, or the Cabinet Secretary, or the Home Secretary, or heaven only knows maybe your mates, will be able to lobby that they should get a contract. We’ve seen with the PPE scandal how that works out. Billions of taxpayer money “spaffed up the wall” by friends of the party regardless of whether they provide a service or whether it’s a shoddy service that they provide.

3) Provision to enable private companies to sit on ICS boards with no maximum representation while local authorities representation is strictly limited. So local people have no voice in how local services are run while the people profiting from them get unlimited say. Great to ensure there are no dissenting voices getting in the way of making a buck.

I could go on.

This bill ends the NHS in all but name & clap. It ushers in US style private health care and bungs to mates of the party.

You said you were happy with your stance on Owen Paterson and letting Tory MPs police their own standards. Is this okay with you too?

Would you be happy explaining to your constituents your support for ending the NHS?

If not, what will you do? What will you do to safeguard the NHS for your constituents in Bracknell?

Yours sincerely,

Matthew Mower

Text of my letter to James Sunderland MP regarding Owen Paterson / Corruption

Dear James Sunderland,

I am writing to you to express my disgust at your voting to whitewash Owen Paterson’s corrupt record and destroy independent oversight of MPs.

Owen Paterson received payments from companies that he lobbied for to get government contracts without bidding. The commissioner has demonstrated that this is corrupt. Perhaps you think that is okay? If so, that says something damning about your moral compass. You and the rest of your ilk have also voted to make it okay for other MPs to be corrupt. Perhaps you think that is okay too?

And we know why you did this: Boris Johnson, who runs scared of his own record being scrutinized. Remember Jennifer Arcuri? Because we do. Remember expensive holidays and flat renovations? Because we do. You’re old enough to remember sleaze bringing down a past government. Perhaps you and your fellow MPs should reflect on that and on your chosen “leader”.

I look forward to the next election. You may think of yourself as being in a safe seat but I am determined to do my bit to remind the people of Bracknell of your record here and the shameful thing you have done to besmirch your good office.

And I am not alone.

I shall also be writing in the same terms to all of the Conservative councillors in Bracknell to ensure they also know what I think of their MP.

Please do not send me some pat PR release from Conservative HQ. Either do me the courtesy of replying yourself or wait for the judgement of the ballot box.

Yours sincerely,

Matthew Mower
A disgusted constituent.

What we lost (a paean, perhaps, to RSS)

Recently Dave Winer has been posting thoughts about using tags (some of us old-timers used to call them ‘topics’) in his blog. This is more than a little bit poignant because I have a history here, and Dave started it with Radio Userland and RSS.

Back in 2003, Paolo Valdemarin and I built a product called k-Collector which was an offshoot of a tool, liveTopics, that I had built for my Radio based blog “Curiouser and Curiouser” (the v1.0 of this one).

While the notion of categories was already a feature of blogs & RSS, liveTopics was I think a first in that it allowed a user to add topics to posts, to publish per-topic RSS feeds, and even to create a tagged index of posts. Thanks to the magic of the Wayback Machine you can still see this stuff, even though that blog is long gone.

Then k-Collector took this notion to the next level by connecting together the topic-based feeds of a community of blogs and creating a collective view, based on shared tags. We published a service W4 for a while. Again the magic of the Wayback Machine means you can still see it (the related topics still work and even the topic filters!)

To feed k-Collector we needed a way to transport tags through RSS. So Paolo and I invented Easy News Topics (ENT) as an RSS 2.0 module to do the work. Whereas liveTopics only worked with Radio Userland, now anyone could play by simply putting their tags in their RSS feed.

k-Collector was too revolutionary for 2004. Companies did not routinely blog at that time, let alone see the value of their employees blogging about their experiences and challenges; they didn’t see the value of connecting the dots. Sad, that.

Today k-Collector would not be possible. That alternative future where everyone started blogging and putting their content in RSS2.0 feeds that we could analyse to connect those conversations did not happen.

Instead Facebook Workplace, Slack, and a host of other silos took hold of the future and, no matter what good they may have done, we’ve all lost out.

Libraries can start processes too!

It’s the kind of thing that, if you are used to other languages, would make you very suspicious. Library code starting its own processes — you can almost feel the mess and sense the bugs just musing over it. Yet in Elixir (and, of course, Erlang too), this is a totally normal thing to do.

The Elixir approach to shared mutable state is wrapping it in a process. In this case, I needed a counter and the easiest way to implement it is to use an Agent which is a kind of process designed to handle simple state. In this case, the get_and_update function allows me to return the counter and increment it as an atomic operation.
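A minimal sketch of that counter (the module name is illustrative):

```elixir
defmodule Counter do
  # Wraps an integer in an Agent process; next/0 returns the current value
  # and increments it in a single atomic get_and_update.
  def start_link(initial \\ 0) do
    Agent.start_link(fn -> initial end, name: __MODULE__)
  end

  def next do
    Agent.get_and_update(__MODULE__, fn n -> {n, n + 1} end)
  end
end
```

After Counter.start_link(), successive calls to Counter.next() return 0, 1, 2, and so on, with no race even when called from multiple processes.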

To start an Agent you use the Agent.start_link function. But where to call it? Do I have to add some equivalent initializer function to my library? While not exactly onerous, it felt awkward, as if my implementation was leaking into the caller. Then again, did I have to stop the agent process somewhere? Where would I do that?

Now, I had figured out how to manage the life-cycle of the agent process myself within the library. But it turns out to be unnecessary. All I had to do was make one change to my mix.exs file and one addition to a module in my library.

def application do
  [
    extra_applications: [:logger]
  ]
end

becomes:

def application do
  [
    extra_applications: [:logger],
    mod: {Ergo, []}
  ]
end

along with changing the referenced library module, Ergo, to look like:

defmodule Ergo do
  use Application

  def start(_type, _args) do
    Supervisor.start_link([Ergo.AgentModule], strategy: :one_for_one)
  end
end


This is enough that any application using my library (called Ergo, btw) will automatically start the Agent and manage its life-cycle, without me, or the calling application, needing to know anything about it at all.

This is a pretty neat trick.