management work

One of the lessons I have learned in moving from working with machines to working with humans is to treat correctness and alignment as two separate problems. This lesson could be phrased in many ways, and I believe there is nothing new in my conception other than, perhaps, this particular choice of words. Correctness has obvious interpretations. Alignment is the problem of whether the relevant ecosystem agrees on an approach, whether correct or not¹.

With machines and [deterministic] programs, alignment usually translates down to correctness. Say you wrote a program that ingests data, but it doesn't work with one of the data sources. This is an alignment problem, yet it surfaces as an error, and we programmers are pretty good at picking up errors and solving them. Among us there is an inherent notion, reinforced by these translations, that correctness implies alignment. But this notion breaks down dramatically when working with humans.
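To make that concrete, here is a minimal sketch in Python, with hypothetical field names of my own choosing, of how a misaligned data source surfaces as a plain error in a deterministic ingestion program:

```python
def ingest(record: dict) -> float:
    # The program "agreed" with its sources that amounts live under "amount".
    return float(record["amount"])

source_a = {"amount": "42.0"}  # aligned with the expected schema
source_b = {"amt": "17.5"}     # a source that never agreed to the schema

print(ingest(source_a))        # works: 42.0

try:
    print(ingest(source_b))
except KeyError as err:
    # The alignment problem arrives as an ordinary error,
    # the kind programmers happily pick up and fix.
    print(f"missing field: {err}")
```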

With human systems, you will find it more important to seek alignment in cases where the gap in correctness is not too large, since a bug is much easier to fix than the problem of failing to get the coordination and support of a person or team. Sometimes this translates into values like 'Disagree and Commit', but I believe this dichotomy of correctness and alignment goes beyond that.

This is an admission I make begrudgingly, but at times you might actually see someone working purely on alignment make more progress than someone who never strays from correctness.

Footnotes:

¹ Note that correctness need not always be binary.