jamesrom a day ago

It's easy to think of notation as something like shell expansion, where all you're doing is replacing expressions with other expressions.

But it goes much deeper than that. My professor once explained how great discoveries are often paired with new notation: that new notation signifies "here's a new way to think about this problem", and many of today's unsolved problems will give way to powerful notation.

  • veqq a day ago

    > paired with new notation

    The DSL/language driven approach first creates a notation fitting the problem space directly, then worries about implementing the notation. It's truly empowering. But this is the lisp way. The APL (or Clojure) way is about making your base types truly useful, 100 functions on 1 data structure instead of 10 on 10. So instead of creating a DSL in APL, you design and layout your data very carefully and then everything just falls into place, a bit backwards from the first impression.

    • xelxebar a day ago

      You stole the words from my mouth!

      One of the issues DSLs give me is that the process of using them invariably obsoletes their utility. That is, the process of writing an implementation seems to be synonymous with the process of learning what DSL your problem really needs.

      If you can manage to fluidly update your DSL design along the way, it might work, but in my experience the premature assumptions of initial designs end up getting baked in to so much code that it's really painful to migrate.

      APL, on the other hand, I have found extremely amenable to updates and rewrites. I mean, even just psychologically, it feels way more sensible to rewrite a couple of lines of code versus a couple hundred, and in practice I find the language well suited to quickly exploring a problem domain with code sketches.

      • skydhash a day ago

        I was playing with Uiua, a stack- and array-based programming language. It was amazing to solve Advent of Code problems with just a few lines of code. And as GP said, once you got the right form of array, the handful of functions in the standard library was sufficient.

      • marcosdumay a day ago

        > One of the issues DSLs give me is that the process of using them invariably obsoletes their utility.

        That means your DSL is too specific. It should be targeted at the domain, not at the application.

        But yes, it's very hard to make them general enough to be robust yet specific enough to be productive. It takes a really deep understanding of the domain, but even this is not enough.

        • xelxebar 21 hours ago

          Indeed!

          Another way of putting it is that, in practice, we want the ability to easily iterate and find that perfect DSL, don't you think?

          IMHO, one big source of technical debt is code relying on some faulty semantics. Maybe initial abstractions baked into the codebase were just not quite right, or maybe the target problem changed under our feet, or maybe the interaction of several independent API boundaries turned out to be messy.

          What I was trying to get at above is that APL is pretty great for iteratively refining our knowledge of the target domain and producing working code at the same time. It's just that APL works best when reifying that language down into short APL expressions instead of English words.

      • dayvigo 17 hours ago

        >If you can manage to fluidly update your DSL design along the way, it might work

        Forth and Smalltalk are good for this. Self even more so. Hidden gems.

    • smikhanov a day ago

          APL (or Clojure) way is about making your base types truly useful, 100 functions on 1 data structure instead of 10 on 10
      
      If this is indeed so simple and so obvious, why haven't other languages followed this way?

      • diggan a day ago

        That particular quote is from the "Epigrams on Programming" article by Alan J. Perlis, from 1982. Many ideas/"epigrams" from that list are useful, and many languages have implemented a number of them. But some of them aren't so obvious until you've actually put them into practice. The full list can be found here: https://web.archive.org/web/19990117034445/http://www-pu.inf... (the quote in question is item #9)

        I think most people haven't experienced the whole "100 functions on 1 data structure instead of 10 on 10" thing themselves, so there are no attempts to bring it to other languages; you're not aware of it to begin with.

        Then the whole static-typing hype (the current cycle) makes it kind of difficult, because static typing tries to force you into the opposite: one function you can use only for whatever type you specify in the parameters. Of course, traits/interfaces/whatever-your-language-calls-it help with this somewhat, even if it's still pretty static.

        • fc417fc802 16 hours ago

          > static typing kind of tries to force you into the opposite

          The entire point being to restrict what can be done in order to catch errors. The two things are fundamentally at odds.

          Viewed in that way typed metaprogramming is an attempt to generalize those constraints to the extent possible without doing away with them.

          I would actually expect array languages to play quite well with the latter. A sequence of transformations without a bunch of conditionals in the middle should generally have a predictable output type for a given input type. The primary issue I run into with numpy is the complexity of accounting for type conversions relative to the input type. When you start needing to account for a variable bit width, things tend to spiral out of control.
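
          As a toy illustration of that spiral (a hypothetical sketch, not anyone's real pipeline): even a simple numpy transformation changes its output dtype with the input dtype, so callers end up tracking the conversions themselves.

              import numpy as np

              def normalize(x):
                  # mean()/std() promote integer input to float64 but keep
                  # float32 as float32, so the output dtype follows the input
                  return (x - x.mean()) / x.std()

              print(normalize(np.arange(5, dtype=np.int8)).dtype)     # float64
              print(normalize(np.arange(5, dtype=np.float32)).dtype)  # float32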

      • electroly 21 hours ago

        "APL is like a diamond. It has a beautiful crystal structure; all of its parts are related in a uniform and elegant way. But if you try to extend this structure in any way - even by adding another diamond - you get an ugly kludge. LISP, on the other hand, is like a ball of mud. You can add any amount of mud to it and it still looks like a ball of mud." -- https://wiki.c2.com/?JoelMosesOnAplAndLisp

      • marcosdumay a day ago

        Because it's domain specific.

        If you push this into every kind of application, you will end up with people recreating objects with lists of lists, and having good reasons to do so.

      • exe34 a day ago

        Some of us think in those terms and daily have to fight those who want 20 different objects, each 5-10 deep in inheritance, to achieve the same thing.

        I wouldn't say 100 functions over one data structure, but e.g. in Python I prefer a few data structures like dictionaries and arrays, with 10-30 top-level functions that operate over those.

        If your requirements are fixed, it's easy to go nuts and design all kinds of object hierarchies - but if your requirements change a lot, I find it much easier to stay close to the original structure of the data that lives in the many files, and operate on those structures.
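
        A hypothetical sketch of that style (the order/customer shapes are invented for illustration):

            # plain dicts and lists in, plain values out; no class hierarchy
            # to migrate when the requirements shift
            def total(order):
                return sum(line["qty"] * line["price"] for line in order["lines"])

            def by_customer(orders):
                grouped = {}
                for order in orders:
                    grouped.setdefault(order["customer"], []).append(order)
                return grouped

            orders = [
                {"customer": "acme", "lines": [{"qty": 2, "price": 3.0}]},
                {"customer": "acme", "lines": [{"qty": 1, "price": 5.0}]},
            ]
            print({c: sum(map(total, os)) for c, os in by_customer(orders).items()})  # {'acme': 11.0}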

        • TuringTest 20 hours ago

          Seeing that diamond metaphor, and then learning how APL sees "operators" as building "functions that are variants of other functions"(1), made me think of currying and higher-order functions in Haskell.

          The high regularity of APL operators, which work the same for all functions, forces the developer to represent business logic in different parts of the data structure.

          That was a good approach when it was created, but modern functional programming offers other tools. Creating pipelines from functors, monads, arrows... allows the programmer to move some of that business logic back into generic functions, retaining the generality and capacity for refactoring without forcing the structure of the data to carry the meaning. Modern PL design has built upon those early insights to provide new tools for the same goal.

          (1) https://secwww.jhuapl.edu/techdigest/content/techdigest/pdf/...

          • exe34 19 hours ago

            If I could write Haskell and build an Android app without having to be an expert in both Haskell and the low-level Android SDK/NDK, I'd be happy to learn it properly.

            • fc417fc802 16 hours ago

              Doesn't that "just" require C FFI of the NDK? Sounds like a headache.

  • peralmq a day ago

    Good point. Notation matters in how we explore ideas.

    Reminds me of Richard Feynman. He started inventing his own math notation as a teenager while learning trigonometry. He didn’t like how sine and cosine were written, so he made up his own symbols to simplify the formulas and reduce clutter. Just to make it all more intuitive for him.

    And he never stopped. Later, he invented entirely new ways to think about physics tied to how he expressed himself, like Feynman diagrams (https://en.wikipedia.org/wiki/Feynman_diagram) and slash notation (https://en.wikipedia.org/wiki/Feynman_slash_notation).

    • nonrandomstring a day ago

      > Notation matters in how we explore ideas.

      Indeed, historically. But are we not moving into a society where thought is unwelcome? We build tools to hide underlying notation and structure, not because it affords abstraction but because it's "efficient". Is there not a tragedy afoot, by which technology, at its peak, nullifies all its foundations? Those of us who can do mental formalism, mathematics, code, etc. - I doubt we will have any place in a future society that values only superficial convenience and the appearance of correctness, and shuns as "slow old throwbacks" those who reason symbolically, "the hard way" (without AI).

      (cue a dozen comments on how "AI actually helps" and amplifies symbolic human thought processes)

      • PaulRobinson a day ago

        Let's think about how an abstraction can be useful, and then redundant.

        Logarithms allow us to simplify a hard problem (multiplying large numbers) into a simpler problem (addition), but the abstraction results in an approximation. It's a good enough approximation for lots of situations, but it's a map, not the territory. You could also solve division, which means you could take decent stabs at powers and roots, and voila - once you made that good enough and a bit faster, an engineering and scientific revolution could take place. Marvelous.

        For centuries people produced log tables - some so frustratingly inaccurate that Charles Babbage thought of a machine to automate their calculation - and we had slide rules and we made progress.

        And then a descendant of Babbage's machine arrived - the calculator, or computer - and we didn't need the abstraction any more. We could quickly type 35325 x 948572 and far faster than any log table lookup, be confident that the answer was exactly 33,508,305,900. And a new revolution is born.
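
        To make the contrast concrete, here's a quick sketch in Python of both routes, using the numbers above:

            import math

            a, b = 35325, 948572
            # the log-table route: add the logarithms, then exponentiate back;
            # floating point (like a log table) gives only an approximation
            approx = 10 ** (math.log10(a) + math.log10(b))
            exact = a * b
            print(approx)  # ~33508305900, off by a tiny rounding error
            print(exact)   # 33508305900, exactly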

        This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you. For a while we had a tool that helped (roughly), and then we got a better tool thanks to that tool. And we might be about to get a better tool again where instead of doing the maths, the tool can use more impressive models of physics and engineering to help us build things.

        The metaphor I often use is that these tools don't replace people, they just give them better tools. There will always be a place for being able to work from fundamentals, but most people don't need those fundamentals - you don't need to understand the foundations of how calculus was invented to use it, the same way you don't need to build a toaster from scratch to have breakfast, or how to build your car from base materials to get to the mountains at the weekend.

        • ryandv a day ago

          > This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you.

          What tool exactly are you referring to? If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article. There is a reason mathematics is no longer done with long-form prose and instead uses its own, more economical notation that is sufficiently precise as to even be evaluated and analyzed by computers.

          Natural languages have a lot of ambiguity, and their grammars allow nonsense to be expressed in them ("colorless green ideas sleep furiously"). Moreover two people can read the same word and connect two different senses or ideas to them ("si duo idem faciunt, non est idem").

          Practice with expressing thoughts in formal language is essential for actually patterning your thoughts against the structures of logic. You would not say that someone who is completely ignorant of Nihongo understands Japanese culture, and custom, and manner of expression; similarly, you cannot say that someone ignorant of the language of syllogism and modus tollens actually knows how to reason logically.

          You can, of course, get a translator - and that is what maybe some people think the LLM can do for you, both with Nihongo, and with programming languages or formal mathematics.

          Otherwise, if you already know how to express what you want with sufficient precision, you're going to just express your ideas in the symbolic, formal language itself; you're not going to just randomly throw in some nondeterminism at the end by leaving the output up to the caprice of some statistical model, or allow something to get "lost in translation."

          • TuringTest 21 hours ago

            > If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article.

            LLMs are not used for notation; you are right that they're not precise enough for accurate knowledge.

            What LLMs do as a tool is solve the Frame Problem, allowing the reasoning system to have access to the "common sense" knowledge that is needed for a specific situation, retrieving it efficiently from a humongous background corpus of diverse knowledge.

            Classic AI based on logical inference was never able to achieve this retrieval; hence the unfulfilled promises in the 2000s of autonomous agents based on ontologies. Those promises seem approachable now thanks to the huge statistical databases of all topics stored in compressed LLM models.

            A viable problem-solving system should combine the precision of symbolic reasoning with the breadth of generative models, creating checks and heuristics that guide autonomous agents to interact with the real world in ways that make sense given the relevant background cultural knowledge.

            https://en.wikipedia.org/wiki/Frame_problem

          • PaulRobinson a day ago

            You need to see the comment I was replying to in order to understand the context of my point.

            LLMs are part of what I was thinking of, but not the totality.

            We're pretty close to Generative AI - and by that I don't just mean LLMs, but the entire space - being able to use formal notations and abstractions more usefully and correctly, and therefore improve reasoning.

            The comment I was replying to complained about this shifting value away from fundamentals and this being a tragedy. My point is that this is just human progress. It's what we do. You buy a microwave, you don't build one yourself. You use a calculator app on your phone, you don't work out the fundamentals of multiplication and division from first principles when you're working out how to split the bill at dinner.

            I agree with your general take on all of this, but I'd add that AI will get to the point where it can express "thoughts" in formal language, and then provide appropriate tools to get the job done, and that's fine.

            I might not understand Japanese culture without knowledge of Nihongo, but if I'm trying to get across Tokyo in rush hour traffic and don't know how to, do I need to understand Japanese culture, or do I need a tool to help me get my objective done?

            If I care deeply about understanding Japanese culture, I will want to dive deep. And I should. But for many people, that's not their thing, and we can't all dive deep on everything, so having tools that do that for us better than existing tools is useful. That's my point: abstractions and tools allow people to get stuff done that ultimately leads to better tools and better abstractions, and so on. Complaining that people don't have a first principle grasp of everything isn't useful.

      • WJW a day ago

        > But are we not moving into a society where thought is unwelcome?

        Not really, no. If anything clear thinking and insight will give an even bigger advantage in a society with pervasive LLM usage. Good prompts don't write themselves.

  • agumonkey a day ago

    There's something about economy of thought and ergonomics. On a smaller scale, when CoffeeScript popped up, it radically altered how I wrote JavaScript, because of the lambda shorthand and all the syntactic conveniences. It made it easier to think, read and rewrite.

    Same goes for SML/Haskell and Lisps (at least to me).

FilosofumRex a day ago

Historically speaking, what killed off APL (besides the wonky keyboard) was Lotus 1-2-3 on the IBM PC and shortly thereafter MS Excel. Engineers, academicians, accountants, and MBAs needed something better than their TI-59 & HP-12C. But the CS community was obsessing about symbolics, AI and LISP, so the industry stepped in...

This was a very unfortunate coincidence, because APL could have had a much bigger impact and solved far more problems than spreadsheets ever will.

  • getnormality 15 hours ago

    > APL could have had much bigger impact and solve far more problems than spreadsheets ever will.

    APL is a symbolic language that is very unlike any other language anyone learns during their normal education. I think that really limits adoption compared to spreadsheets.

  • cess11 a day ago

    As I understand it, Dyalog gives away their compiler, until you put it in production. You can do all your problem solving in it without giving them any money, unless you also put the compiled result in front of your paying customers. If your solution fits a certain subset you can go full bananas and copy it into April and serve from Common Lisp.

    The thing is that APL people are generally very academic. They can absolutely perform engineering tasks very fast and with concise code, but in some hypothetical average software shop, if you start talking about function ranking and Naperian functors, your coworkers are going to suspect you might need medical attention. The product manager will quietly pull out their notes about you and start thinking about the cost of replacing you.

    This is for several reasons, but the most important one is that the bulk of software development is about inventing a technical somewhat formal language that represents how the customer-users talk and think, and you can't really do that in the Iverson languages. It's easy in Java, which for a long time forced you to tell every method exactly which business words can go in and come out of them. The exampleMethod combines CustomerConceptNo127 from org.customer.marketing and CustomerConceptNo211 from org.customer.financial and results in a CustomerConceptNo3 that the CEO wants to look at regularly.

    Can't really do that as easily in APL. You can name data and functions, sure, but once you introduce long-winded names and namespaced structuring to map a foreign organisation into your Iverson code, you lose the terseness and elegance. Even with the exceptionally sophisticated type systems of the ML family, you'll find that developers struggle to make such direct connections between an invented quasilinguistic ontology and an organisation and its processes, and more regularly opt for mathematical or otherwise academic concepts.

    It can work in some settings, but you'll need people that can do both the theoretical stuff and keep in mind how it translates to the customer's world, and usually it's good enough to have people that can only do the latter part.

    • xelxebar a day ago

      > Can't really do that as easily in APL.

      This doesn't match my experience at all. I present to you part of a formal language over an AST, no cover functions in sight:

          p⍪←i ⋄ t k n pos end(⊣⍪I)←⊂i            ⍝ node insertion
          i←i[⍋p[i←⍸(t[p]=Z)∧p≠⍳≢p]]              ⍝ select sibling groups
          msk←~t[p]∊F G T ⋄ rz←p I@{msk[⍵]}⍣≡⍳≢p  ⍝ associate lexical boundaries
          (n∊-sym⍳,¨'⎕⍞')∧(≠p)<{⍵∨⍵[p]}⍣≡(t∊E B)  ⍝ find expressions tainted by user input
      
      These are all cribbed from the Co-dfns[0] compiler and related musings. The key insight here is that what would be API functions or DSL words are just APL expressions on carefully designed data. To pull this off, all the design work that would go into creating an API goes into designing said data to make such expressions possible.

      In fact, when you see the above in real code, they are all variations on the theme, tailored to the specific needs of the immediate sub-problem. As library functions, they would tend to accrete functions and function parameters over time to serve those needs, making them harder to understand and visually noisier in the code.

      To my eyes, the crux is that our formal language is _discovered_ not handed down from God. As I'm sure you're excruciatingly aware, that discovery process means we benefit from the flexibility to quickly iterate on the _entire architecture_ of our code, otherwise we end up with baked-in obsolete assumptions and the corresponding piles of workarounds.

      In my experience, the Iversonian languages provide architectural expressability and iterability _par excellence_.

      [0]:https://github.com/Co-dfns/Co-dfns/tree/master

    • skydhash a day ago

      Java and C# are good for the kind of situation where you want to imitate the business jargon, but in a technical form. Programming languages like CL, Clojure, and APL have a more elegant and flexible way to describe the same solution, and one that is easier to adapt, because business jargon is very flexible (business objectives and policies are likely to change next quarter). And in Java, rewriting means changing a lot of lines of code (easier with the IDE).

      The data rarely changes, but you have to put a name on it, and those names are dependent on policies. That's the issue with most standard programming languages. In functional languages and APL, you don't name your data, you just document its shape[0]. Then when your policies are known, you just write them using the functions that act on each data type (lists, sets, hashes, primitives, functions, ...). Policy changes just mean a little bit of reshuffling.

      [0]: In the parent example, CustomerConceptNo{127,211,3} are the same data, but with various transformations applied and with different methods to use. In functional languages, you would only have a customer data blob (probably coming from some DB), then a chain of functions that pipe out the CustomerConceptNo{127,211,3} forms when they are actually needed (generally at the interface). But those forms would be composed of the same data structures the original blob has, so your base functions do not automatically become obsolete.
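
      A hypothetical sketch of that pipeline, reusing the parent's placeholder names:

          # one customer blob straight from the DB; views are derived on
          # demand rather than frozen into named types
          def marketing_view(customer):  # roughly the parent's CustomerConceptNo127
              return {"name": customer["name"], "segment": customer["segment"]}

          def financial_view(customer):  # roughly CustomerConceptNo211
              return {"name": customer["name"], "balance": customer["balance"]}

          def ceo_summary(customer):     # CustomerConceptNo3, composed from the same blob
              return {**marketing_view(customer), **financial_view(customer)}

          blob = {"name": "acme", "segment": "smb", "balance": 42.0}
          print(ceo_summary(blob))  # {'name': 'acme', 'segment': 'smb', 'balance': 42.0}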

    • buescher a day ago

      You left out that exampleMethod will of course belong to a Conway's law CustomerConceptManager object. I think this is one of the reasons that software as a field has taken off so well in recent decades while more conventional physical-stuff engineering has stagnated (outside of lithography to support... software) - you can map bureaucracy onto software and the bureaucratic mindset of "if I think I should be able to do something, I should just be able to tell people to figure out how" has fewer hard limits in software.

talkingtab a day ago

The base concept is related to other useful ones.

The Sapir-Whorf hypothesis is similar. I find it most interesting when you turn it upside down - in any less than perfect language there are things that you either cannot think about or are difficult to think about. Are there things that we cannot express and cannot think about in our language?

And the terms "language" and "thought" can be broader than our usual usage. For example, do the rules of social interaction determine how we interact? Zeynep Tufekci in "Twitter and Tear Gas" talks about how Twitter affords flash mobs, but not lasting social change.

Do social mechanisms like "following" someone or "commenting" or "liking" determine/afford us ways of interacting with each other? Would other mechanisms afford better collective thinking? Comments below. And be sure to like and follow. :-)

And then there is music. Not the notation, but does music express something that cannot be well expressed in other ways?

  • myflash13 18 hours ago

    > in any less than perfect language there are things that you either cannot think about or are difficult to think about.

    Not that they are difficult to think about; rather, it would never have occurred to you to think about them in the first place. As a person who has learned multiple foreign languages, there are many things that I can think about only in a certain language but not in my native language (English). For example, there are many meanings of “гулять” in Ukrainian and Russian that are not captured in English, and I never thought about those meanings before I learned Ukrainian and Russian.

    Гулять literally means “to walk” but it is used to mean so much more than that. It also means to seek experiences, including sexual experiences. A person might complain they got married too early because “не нагулялся” or “didn’t walk enough”. While similar expressions exist in English, like “sow his wild oats”, it affects my thoughts differently to have so much meaning in the verb “to walk”. It literally changes how I think about walking through life.

    Similarly when I learned Arabic, there are many meanings and thoughts that I only have in that language and would take entire essays to explain in English. Not because it can’t be explained in English (it can) but the notation is just not there to do it succinctly enough.

  • patcon a day ago

    I love your comment! I can tell that you're a tangential, referential thinker, and I assume the downvotes are more from people frustrated that the bounds of the thought-space are being fuzzed in ways that unsettle their minds.

    Metaphor and analogy are in a similar spirit to what you are speaking of. The thinking they ground offers a frustrating inability to be contained. Some people love to travel through language into other thoughts, and some are paralysed by that prospect.

    As always, success is in the balance and bothness :)

    Anyway, thanks for sharing!

mathgradthrow a day ago

Despite this being very obviously true to any mathematician or computer scientist, this idea is incredibly controversial among linguists and "educators".

The linguistic analogue (although the exact example of notation does solidly fit into the domain of linguistics) is the so-called Sapir-Whorf hypothesis, which asserts that what language you learn determines how you think.

Because natural languages are cultural objects, and mapping cultures into even a weak partial order (like how a person thinks) is simply verboten in academia.

This has profound consequences in education too, where students are disallowed from learning notation that would allow them to genuinely reason about the problems that they encounter. I admit to not understanding this one.

  • andoando a day ago

    Sapir-Whorf doesn't negate the possibility of forming the same ideas using more primitive constructs.

    The ability of speakers of any language to learn the same mathematics or computer programs goes to show that.

    I'd contest that spoken/written language is even necessary for thinking. At the very least, there is a large corpus of thought which does not require it (at some point humans spoke no words or very few, and it was their thought and intention to communicate that drove the formation of words and language), so it's silly to me to think of learned language as some base model of thought.

    • mathgradthrow 20 hours ago

      When you learn another language (mathematics, programming), the Sapir-Whorf hypothesis no longer makes the same predictions.

  • taeric 20 hours ago

    I've had arguments about the Sapir-Whorf idea before. Sucks, as I'm not familiar with the environment it originated in, but people seem to have taken an encoding idea and expanded it to experience, writ large.

    That is, people will claim that some societies having the same word for the color of the sea and the color of grass indicates that their members don't experience a difference between the two. Not just that they encode the experiences into memories similarly, but that they don't see the differences.

    You get similar when people talk about how people don't hear the sounds that aren't used by their language. The idea is that the unused sounds are literally not heard.

    Is that genuinely what people push with those ideas?

    The argument of notation, as here, is more that vocabulary can be used to explore. Instead of saying you heard some sound, you heard music. Specific chord progressions and such.

    • 7thaccount 19 hours ago

      I think they push the opposite... mainly a weak Sapir-Whorf, but not a strong one. You still have a human brain, after all. Do people in Spain think of a table as inherently having feminine qualities because their language is gendered? Probably, to some very small degree.

      There is a linguist claiming a stronger version after translating and working with the piriue (not spelling that right) people. Chomsky refuses to believe it, but they can't falsify the guy's claims until someone else goes and verifies. That's what I read anyway.

      Edit: Piraha people and language

      https://en.m.wikipedia.org/wiki/Pirah%C3%A3_language

      • taeric 19 hours ago

        Funny, as I've seen a lot of people try to push the idea that grammatical gender is fully separate from the gender of sex, the idea being that there is no real connection between them. I always find this a tough conversation because English is not a heavily gendered language. I have no idea how much gender actually enters thinking in the languages that we say are gendered.

        My favorite example of this used to be my kids talking about our chickens. Trying to get them to use feminine pronouns for the animals is basically a losing game. That cats are still coded as primarily female, despite us never having had a female cat, is largely evidence to me that something else is going on there.

        I'm curious if you have any reading recommendations on the last point. Can't promise to get to it soon, but I am interested in the ideas.

        • 7thaccount 19 hours ago

          I edited my previous comment. See here:

          https://en.m.wikipedia.org/wiki/Pirah%C3%A3_language

          There are also a ton of videos and blog posts on the subject.

          • taeric 18 hours ago

            Thanks! I should have been clear on the request, too. I'm curious if there are any good high density reads, as well as if there are some to avoid. It can be easy to fall into poor sources that over claim on what they are talking about. And I don't mean nefariously. Often enthusiasm is its own form of trap.

            • 7thaccount 18 hours ago

              Ah, I gotcha. Sorry, it's just something I've skimmed at the surface level. The videos do refer to the key researchers, so you could look up their papers. I'm not sure what else would make sense.

        • skydhash 16 hours ago

          > I have no idea how much gender actually enters thinking in the languages that we say are gendered.

          Not much. It's mostly inference rules, just like the English use of pronouns. It's just more pervasive, pertaining to verbs, adjectives, etc. If we're talking about gendered living organisms, then it's just a marker for that binary classification. Anything else, it's just baggage attached to the word.

Duanemclemore 19 hours ago

I am currently developing a project in APL - it was in my backlog for a long time but I'm actually writing code now.

I phrase it that way because there was a pretty long lag between when I got interested and when I was able to start using it in more than one-liners.

But I discovered this paper early in that process and devoured it. The concepts are completely foundational to my thinking now.

Saying all this to lead to the fact that I actually teach NAATOT in an architecture program. Not software architecture - building architecture. I have edited a version to hit Iverson's key points, kept in just enough of the actual math / programming to illustrate these points, and challenge the students to try to think differently about the possibilities for the design and representation tools they use, e.g. their processes for forming ideas, their ways of expressing them to themselves and others, and so forth.

If I had the chance (a looser, more open-ended program rather than an undergraduate professional program), one of my dreams is to run a course where students would make their own symbolic and notational systems... (specifically applied to the domain of designing architecture - this is already pretty well covered in CS, graphic design, data science, etc.)

jweir a day ago

After years of looking at APL as some sort of magic, I spent some time earlier this year learning it. It is amazing how much code you can fit into a tweet using APL. Fun, but hard for me to write.

  • fc417fc802 a day ago

    It's not as extreme but I feel similarly every time I write dense numpy code. Afterwards I almost invariably have the thought "it took me how long to write just that?" and start thinking I ought to have used a different tool.

    For some reason the reality is unintuitive to me - that the other tools would have taken me far longer. All the stuff that feels difficult and like it's just eating up time is actually me being forced to work out the problem specification in a more condensed manner.

    I think it's like climbing a steeper but much shorter path. It feels like more work but it's actually less. (The point of my rambling here is that I probably ought to learn APL and use it instead.)

    • skruger a day ago

      Should you ever decide to take that leap, maybe start here:

      https://xpqz.github.io/learnapl

      (disclosure: author)

      • sitkack a day ago

        I have been reading through your site, working on an APL DSL in Lisp. Excellent work! Thank you.

      • Duanemclemore 19 hours ago

        Stefan, I tried many different entry points to actually start learning APL. (I have a project which is perfectly fit to the array paradigm).

        Yours is by far the best. Thank you for it.

        • skruger 2 hours ago

          Thanks for saying that. I will update it at some stage for the new array notation stuff.

      • -__---____-ZXyw 20 hours ago

        Looks wonderful, thanks for sharing your work!

    • jonahx a day ago

      Indeed, numpy is essentially just an APL/J with more verbose and less elegant syntax. The core paradigm is very similar, and numpy was directly inspired by the APLs.

      • bbminner a day ago

        I don't know APL, but that has been my thought as well - if APL does not offer much over numpy, I'd argue that the latter is much easier to read and reason through.

        • jonahx 19 hours ago

          If you acquire fluency in APL -- which granted takes more time than acquiring fluency in numpy -- numpy will feel bloated and ungainly. With that said, it's mostly an aesthetic difference and there are plenty of practical advantages to numpy (the main one being there is no barrier to entry, and pretty much everyone already knows python).

        • skydhash a day ago

          I thought that too, but after a while the symbols become recognizable (just like math symbols) and then it's a pleasure to write if you have completion based on their names (the Uiua developer experience with Emacs). The issue with numpy is the intermediate variables you have to use due to using Python.

    • novosel a day ago

      >Afterwards I almost invariably have the thought "it took me how long to write just that?" and start thinking I ought to have used a different tool.

      I think there is also a psychological bias: we feel more "productive" in a more verbose language. Subconsciously at least, we think "programmers produce code" instead of "programmers build systems".

    • xelxebar a day ago

      > All the stuff that feels difficult and like it's just eating up time is actually me being forced to work out the problem specification in a more condensed manner.

      Very well put!

      Your experience aligns with mine as well. In APL, the sheer austerity of architecture means we can't spend time on boilerplate and are forced to immediately confront core domain concerns.

      Working that way has gotten me to see code as a direct extension of business, organizational, and market issues. I feel like this has made me much more valuable at work.

  • pinkamp a day ago

    Any examples you can share?

InfinityByTen a day ago

> Nevertheless, mathematical notation has serious deficiencies. In particular, it lacks universality, and must be interpreted differently according to the topic, according to the author, and even according to the immediate context.

I personally disagree with the premise of this paper.

I think notation that is separated from the visualization and ergonomics of the problem has a high cost. Some academics prefer a notation that hides away a lot of the complexity, which can potentially result in "Eureka" realizations, wild equivalences and the like. In some cases, however, it can be obfuscating and prone to introducing errors. Yet it's an important tool in communicating a train of thought.

In my opinion, having one standard notation for any domain or closely related domains quite stifles the creative, artistic, or explorative side of reasoning and problem solving.

Also, here's an excellent exposition about notation by none other than Terry Tao https://news.ycombinator.com/item?id=23911903

  • tossandthrow a day ago

    This feels like typed programming vs. untyped programming.

    There are efforts in math to build "enterprise" reasoning systems. For these it makes sense to have a universal notation system (Lean, Coq, and the like).
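
    As a taste of what such a universal notation looks like, here is a one-line, machine-checked fact (a minimal sketch of mine, in Lean 4):

        -- stated and proved in a notation every Lean reader parses the same way
        theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b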

    But for a personal exploration, it might be better to just jam in whatever.

    My personal struggle in this space is more about teaching: taking algebra classes, etc., where the teacher is neither consistent nor honest about the personal decisions and preferences they have on notation. I became significantly better at math when I started studying type theory and the theory of mechanical proofs.

    • InfinityByTen a day ago

      I have to admit that consistency and clarity of thought are often not implied just by the choice of notation, and I have not seen many books or professors putting effort into emphasizing its importance or even introducing it formally. I've seen cases where people use fancy notation to document topics differently from how they actually think about them. It drives me nuts, because in the way you tell the story, you hide a lot of how you arrived there.

      This is why I picked up so well on the exposition by Terry Tao. It shows how much clarity of thought he has, that he understands the importance of notation.

  • boxed a day ago

    The problem the article is talking about is that those different notations are used for super basic stuff that really do not need any of that.

xelxebar a day ago

> Subordination of detail

The paper doesn't really explore this concept well, IMHO. However, after a lot of time reading and writing APL applications, I have found that it points at a way of managing complexity radically different from abstraction.

We're inundated with abstraction barriers: APIs, libraries, modules, packages, interfaces, you name it. Consequences of this approach are almost cliché at this point—dizzyingly high abstraction towers, developers as just API-gluers, disconnect from underlying hardware, challenging to reason about performance, _etc._

APL makes it really convenient to take a different tack. Instead of designing abstractions, we can carefully design our data to be easily operated on with simple expressions. Where you would normally see a library function or DSL term, this approach just uses primitives directly:

For example, we can create a hash map of vector values and interned keys with something like

    str←(⊂'') 'rubber' 'baby' 'buggy' 'bumpers'             ⍝ string table
    k←4 1 2 2 4 3 4 3 4 4                                   ⍝ keys
    v←0.26 0.87 0.34 0.69 0.72 0.81 0.056 0.047 0.075 0.49  ⍝ values
Standard operations are then immediately accessible:

    k v⍪←↓⍉↑(2 0.33)(2 0.01)(3 0.92)  ⍝ insert values
    k{str[⍺] ⍵}⌸v                     ⍝ pretty print
    k v⌿⍨←⊂k≠str⍳⊂'buggy'             ⍝ deletion
What I find really nice about this approach is that each expression is no longer a black box, making it really natural to customize expressions for specific needs. For example, insertion in a hashmap would normally need to have code for potentially adding a new key, but above we're making use of a common invariant that we only need to append values to existing keys.

If this were a library API, there would either be an unused code path here, lots of variants on the insertion function, or some sophisticated type inference to do dead code elimination. Those approaches end up leaking non-domain concerns into our codebase. But, by subordinating detail instead of hiding it, we give ourselves access to as much domain-specific detail as necessary, while letting the non-relevant detail sit silently in the background until needed.

Of course, doing things like this in APL ends up demanding a lot of familiarity with the APL expressions, but honestly, I don't think that ends up being much more work than deeply learning the Python ecosystem or anything equivalent. In practice, the individual APL symbols really do fade into the background and you start seeing semantically meaningful phrases instead, similar to how we read English words and phrases atomically and not one letter at a time.

  • jonahx a day ago

    To rephrase crudely: "inline everything".

    This is infeasible in most languages, but if your language is concise and expressive enough, it becomes possible again to a large degree.

    I always think about how Arthur Whitney just really hates scrolling. Let alone 20 open files and chains of "jump to definition". When the whole program fits on a page, all that vanishes. You navigate with eye movements.

    • dayvigo 16 hours ago

      > To rephrase crudely: "inline everything".

      Sounds a lot like what Forth does.

  • DHRicoF a day ago

    > k v⍪←↓⍉↑(2 0.33)(2 0.01)(3 0.92)  ⍝ insert values
    > k{str[⍺] ⍵}⌸v                     ⍝ pretty print
    > k v⌿⍨←⊂k≠str⍳⊂'buggy'             ⍝ deletion

    I like your funny words. No, really, I should spend some time learning APL.

    But your idea deeply resonates with my struggle of the last few weeks.

    I have legacy Python code with too much coupling, and every prior attempt to "improve things" ended up adding more abstraction over a plainly wrong data model.

    You can't infer, reading the code linearly, which methods mutate their input objects. Some do, some don't. Sometimes the same input argument is returned even without mutation.

    I would prefer some magic string that could be analyzed and understood over this sea of indirection, with factories returning different calculators that in some instances don't even share the same interface.

    Sorry for the rant.

  • rak1507 17 hours ago

    The problem is that this isn't a "hashmap" in any meaningful sense, because all the operations are O(n).

    • xelxebar 13 hours ago

      Hey, I know you.

      > all the operations are O(n)

      Not true, fortunately!

      The cat-gets pattern for insertion is clearly just an O(1) append. Similar for the replicate-gets in the deletion.

      Finding the deletion mask might be O(n) or O(log n), depending[0] on the size of your search space.

      Key is effectively just a radix sort, which is indeed O(n) on the keys, but a trad hashmap doesn't get any better in this case.

      [0]:https://help.dyalog.com/19.0/#Language/Defined%20Functions%2...

      • rak1507 12 hours ago

        The append is the only thing that is O(1); finding the deletion mask is linear (≠ is linear, isn't it?) and the actual deletion is also linear (⌿ is also linear).

               ]runtime -repeat=1s 'v←k1 ⋄ v⌿⍨←v≠2'
        
         * Benchmarking "v←k1 ⋄ v⌿⍨←v≠2", repeat=1s
         ┌──────────┬──────────────┐
         │          │(ms)          │
         ├──────────┼──────────────┤
         │CPU (avg):│0.008491847826│
         ├──────────┼──────────────┤
         │Elapsed:  │0.008466372283│
         └──────────┴──────────────┘
               ]runtime -repeat=1s 'v←k2 ⋄ v⌿⍨←v≠2'
        
         * Benchmarking "v←k2 ⋄ v⌿⍨←v≠2", repeat=1s
         ┌──────────┬────────────┐
         │          │(ms)        │
         ├──────────┼────────────┤
         │CPU (avg):│0.8333333333│
         ├──────────┼────────────┤
         │Elapsed:  │0.83        │
         └──────────┴────────────┘
  • TuringTest 19 hours ago

    > APL makes it really convenient to take a different tack. Instead of designing abstractions, we can carefully design our data to be easily operated on with simple expressions. Where you would normally see a library function or DSL term, this approach just uses primitives directly

    But in doing that, aren't you simply moving the complexity of the abstractions from the function code into the data structure? Sure, you can use generic operators now, but then you need to carefully understand what the value pairs represent in terms of domain logic and how to properly maintain the correct structure in every operation. Someone reading the program for the first time will have just the same difficulties understanding what the code does, not in terms of primitives, but in terms of business domain meanings.

    I mean, there is an improvement in your approach, but I don't think it comes from putting the complexity in a different place; it comes from the fact that this way you get to see the code and the actual values at the same time.

    My insight is that what makes complex programming easy is this juxtaposition of runtime data and code operations, visible together. That's why IDE tools have been building better and better debuggers and inspectors to let you see what the program is doing at each step.

    In that context, creating new, good, concise abstractions is a good thing, whether you abstract parts of the operations or parts of the data structure.

    • xelxebar 13 hours ago

      You're absolutely right that this approach encodes domain complexity in the data, and I agree that's not the magic sauce per se.

      Honestly, it's just really hard to convey how simple code can be. Imagine a neural net inference engine in 30 straightforward lines of code and no libraries! [0] On one page you really can read off the overall algorithm, tradeoffs made, algorithmic complexity, and intended highlights.

      Abstractions encourage hiding of information whereas subordination encourages keeping it implicit.

      Are you familiar with Observability 2.0 and wide events? In this data-first model we effectively encode all program behavior into a queryable data structure. I'll often breakpoint something I'm working on and iteratively drill down into the data to get to the root of a problem. That's just so friggin cool, IMHO.

      Abstraction almost always manages complexity by introducing some control flow barrier like an API call. Observing said control flow is a lot more indirect, though, either via statistical sampling or in-flight observation. In contrast, imagine if you could simply throw SQL queries at your application, and you get an idea of what I'm stumbling to convey here.

      [0]:https://doi.org/10.1145/3589246.3595371

    • skydhash 15 hours ago

      > But in doing that, aren't you simply moving the complexity of the abstractions from the functions code into the data structure?

      The core idea behind most of functional programming and APL is that data doesn't change; policies do. So you design a set of primitives that can represent any data type by combination, then build a standard library that combines and splits them, as well as transforms them into each other. Then you have everything you need for expressing policies.

      Let's take music notation as an analogy. You only need the staff for pitch and the note head for duration. And with those (plus other extra symbols), you can note down most music. You can move higher up with chords, but you don't really need to go much higher than that.

      Data is always external to the business domain; only policies are internal. Those policies need semantics on the data, so we have additional policies that take rough data and transform it into meaningful data. What Java, C#, and others do is fix those semantics in the code and have the code revolve around them. But those labels are in fact transient in either the short or the long term. What you really need is a good way to write policies and reason about them.

      I like Go because it lets you define data semantics well, but doesn't confine your policies.

      > Someone reading the program for the same time will have just the same difficulties understanding what the code does, not in terms of primitives, but in terms of business domain meanings.

      That's what naming and namespacing are there for. Also docstrings.

  • jodrellblank 15 hours ago

    > "immediately accessible"

    Most of your 'insert values' code is wrangling the data from what the interpreter lets you type into the form you need; ⍪← does the appending, which is what you want, while ↓⍉↑()()() is input parsing and transformation which you don't care about but have to do to get around APL limitations and the interpreter's input parsing limitations. 9/11ths of the symbols in that line are APL boilerplate. 81% noise.

    Deletion with k v⌿⍨←⊂k≠str⍳⊂'buggy' too; enclosing 'buggy' for index-of to search for it as a single element in a nested array, keeping in mind that (⍴'a')≠(⍴'aa') in case you had a single-character key. Making a Boolean array which has nothing to do with your problem domain - it's what you have to do to deal with APL not offering you the thing you want.

    Saying "we can create a hash map of vector values" is misleading because there isn't any hashing going on. Your code doesn't check for duplicate keys. You can't choose the hash, can't tune the hash for speed vs. random distribution. The keys are appended in order which makes searching slower than a sorted array (and you can't sort array k without messing up its links to array v, same with using nub ∪ on it to make it a set) and at scale the array search becomes slow - even with Dyalog APL's bit-packing and SIMD accelerated searches behind the scenes - so there is a magic interpreter command (I-Beam 1500)[1] which marks an array for internal hashing for faster lookups. But remember the leaky internal abstractions because ⍪← isn't really appending; APL's model is nearly immutable, the array is streamed through and updated into a new array, so the hash has to work with that and you need to understand it and keep it in mind for the times it won't work and some code will invalidate the hash and end up slower.

    ----

    Good ideas of language and tooling design such as "pit of success", "there's only one way to do it", "the first way that comes to mind should be the right way to do it", "different work should look different" are all missing from APL; code that looks workable often has some small internal detail why it isn't; there's no obvious way to do common things. Compare it with Python or C# or similar languages which have syntax for kv={'a':1, 'b':2} and it "just works"; if you miss parens or miss a colon it looks clearly wrong, throws editor squiggly underlines, and gives compile-time errors. There are many fewer trivial typos which look plausible and run error-free but screw things up. Make a mistake and get an error, and the error won't be LENGTH ERROR or DOMAIN ERROR, it will be something mostly helpful. The map behaves how one would expect - deleting doesn't stream the entire dictionary through memory and make a new copy; there's no need to manually link indices between different arrays; failed key lookup gives null or a helpful error; the key can be any reasonable data type in the language, etc.

    APL implementations are reliant on magic interpreter functions for IO (⎕NGET, ⎕CSV, ⎕JSON and friends). Error handling is bad, logging is bad, debugging is bad, because an entire expression is executed as one thing - and with hooks and forks, one thing which can't easily be split into smaller pieces. e.g. if ↓⍉↑ is a valid transform that returns some data but it's not the transform you want, how do you find out why not? How do you find out what it is doing instead of what you thought it was doing? How do you tell that k v⌿⍨←⊂... needed that enclose on the boolean matrix? Trying without the enclose gets LENGTH ERROR, then what? The language and tooling give you minimal to zero help with any of this, you're left with experimentation - and that sucks because you can't actually type in the 2D arrays you want to work with and the code doesn't work for some mismatch of shape/nesting/enclosing/primitives that you don't understand, so experiments to understand what's happening don't work for the exact same reasons that you can't do the experiments for reasons you don't understand.

    One needs a precise feel for all the different array shapes, why ⊂3 doesn't seem to do anything, why ⊆ and ⊂ are different, what things are a 1D array and what are a 2D array with one row or a boxed 1D array or a nested 1D array as a scalar inside a 2D array, or which things are working by scalar extension[3] before one can do much of anything including experimenting and learning how to do much of anything.

    Where is the 'immediately accessible' pattern "if a key is in the dictionary, update it, if it's not there, add it"? In Python it's if/else and "key in map". In C# it's if/else and "map.Contains(key)". In your APL design we start with the deletion code and enclose the key, search for it in str to get an index-of and hope it only comes up once, then immediately we find branching is difficult. Do we use ⍣ with a boolean arg to choose whether it runs or not? Do we always append but do a boolean compress on the data to append something-or-nothing? Do we use dfns and guards? Do we switch to a trad-fn and use :if/:else? Do we make an array of functions and index which one to execute, or an array of names and index which one to eval ⍎? There's no immediate answer, so instead of skipping over this trivial non-issue in any other language and getting on with the problem, we descend into "how to reimplement a basic thing in APL" for branching.
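
    For reference, a minimal sketch of that upsert pattern in Python (hypothetical names):

        counts = {}
        key = "buggy"
        # the explicit branch...
        if key in counts:
            counts[key] += 1
        else:
            counts[key] = 1
        # ...or the idiomatic one-liner that does the same thing
        counts[key] = counts.get(key, 0) + 1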

    > "What I find really nice about this approach is that each expression is no longer a black box, making it really natural to customize expressions for specific needs."

    It's an argument Aaron Hsu has made, but it's like using Up-Goer 5 or speaking Toki Pona, where you can't say "fire engine" but can say "car of man with job to stop fire".[4]

    [1] https://docs.dyalog.com/latest/CheatSheet%20-%20I-Beams.pdf

    [3] https://aplwiki.com/wiki/Scalar_extension

    [4] https://xkcd.com/1133/

    • xelxebar 11 hours ago

      Man, you bring up so many good, juicy points. I wish we could sit down and chat about them!

      First off, just to be clear, I totally understand the friction points you bring up. APL's learning curve is crazy steep, and learning materials beyond the beginner level are nigh on nonexistent. It sucks, I agree.

      That said, if you're able to push through the abyss, I can assure you that most of that friction drops away. It's clear that you're still thinking in terms of individual primitive operations, but with familiarity things like k v⌿⍨← just read as atomic operations, in the same way that you read "laser" not "l-a-s-e-r".

      More than that, though, the if/else struggle you mention really brings back memories. In practice it's really not an issue. We just ⍪←∪ to insert potentially new keys, str←str[x←⍋str] ⋄ k v⊇←⊂x if a sort is needed, etc. But we only do those when the operations on the data demand it. Otherwise there's no point in encoding irrelevant extra structure in the data.

      It took me a good couple years of regular APL hacking for things to click, but I'm not really sure how to get across the feeling, so you'll just have to believe me or not, I guess.

dang 17 hours ago

Related. Others?

Notation as a tool of thought (1979, APL founder Iverson) [pdf] - https://news.ycombinator.com/item?id=35131720 - March 2023 (1 comment)

Notation as a Tool of Thought (1979) - https://news.ycombinator.com/item?id=32178291 - July 2022 (62 comments)

Notation as a Tool of Thought (1979) [pdf] - https://news.ycombinator.com/item?id=28937268 - Oct 2021 (65 comments)

Notation as a Tool of Thought - https://news.ycombinator.com/item?id=25249563 - Nov 2020 (43 comments)

J Notation as a Tool of Thought - https://news.ycombinator.com/item?id=24166260 - Aug 2020 (59 comments) (<-- different article that snuck in, but I don't want to kick it out)

Notation as a Tool of Thought (1979) - https://news.ycombinator.com/item?id=18188685 - Oct 2018 (5 comments)

Notation as a Tool of Thought (1979) [pdf] - https://news.ycombinator.com/item?id=16842378 - April 2018 (13 comments)

Notation as a Tool of Thought (1979) - https://news.ycombinator.com/item?id=13199880 - Dec 2016 (25 comments)

Notation as a Tool of Thought (1980) - https://news.ycombinator.com/item?id=10182942 - Sept 2015 (17 comments)

Notation as a Tool of Thought (1979) [pdf] - https://news.ycombinator.com/item?id=8392384 - Oct 2014 (12 comments)

Notation as a Tool of Thought - https://news.ycombinator.com/item?id=3118275 - Oct 2011 (19 comments)

karaterobot 17 hours ago

This readme file, which is about how notation systems influence thought, is something I refer to again and again. It's given me a lot of insights and ideas. I recommend it!

https://github.com/kai-qu/notation

It references Iverson, but is very anecdotal and wider ranging.

Pompidou 21 hours ago

J has revolutionized the way I think. I can confirm it: ever since I learned to program with it, I’ve been able to solve so many problems that previously felt far beyond my (limited) intellectual reach. For me, J (and by extension, the APL family more broadly) has contributed as much to my thinking as the methodologies I learned during my university studies in philosophy. Suffice it to say, it has profoundly and positively shaken me.

  • dayvigo 16 hours ago

    The other commenter asked about why J, but I want to more specifically ask, why did you choose J over K? I can't decide which to learn.

    • elcaro 13 hours ago

      Answering for myself: I started looking into APL, but at the time learning the keymaps for the symbols seemed difficult, so I started learning array-language techniques with J, and ended up liking it.

      Some of the knowledge you acquire from one array language translates to the others, but the semantics are not 1:1 (i.e. J is not just ASCII APL).

      As it currently stands, I dabble in almost all the array languages, but of the ones I use, I think best in J. It's got a lot of convenience in its large set of primitives, and there's no "indirection" between the character you want and the one you type. Still, it's not without its shortcomings.

      I think I would probably still recommend BQN to most people looking to get into array programming, as it removes some warts in APL/J.

      You lose some convenience functions (base conversion, fixed point, etc.) but you get a language which is more uniform and intuitive. And FWIW, I had the keymaps learned within the first day I started to play with it.

      It's still a learning curve regardless of which lang you learn, but it will change how you think.

  • -__---____-ZXyw 21 hours ago

    a. How did you get into it?

    b. Why J, as opposed to another APL implementation? In particular, the different syntax, I suppose I mean.

    Thanks!

cess11 a day ago

Last year The Array Cast republished an interview with Iverson from 1982.

https://www.arraycast.com/episodes/episode92-iverson

It's quite interesting, and arguably more approachable than the Turing lecture.

In 1979 APL wasn't as weird and fringe as it is today, because programming languages weren't the global mass phenomena that they are today; pretty much all of them were weird and fringe. C was rather fresh at the time, and if one squints a bit, APL can kind of look like an abstraction that isn't very far from dense C and allows you to program a computer without having to implement the pointer juggling over arrays yourself.

  • zehaeva 19 hours ago

    A little off topic, but I am always amazed at the crazy niche podcasts that exist out in the world.

    As an example, I listened to a now-defunct podcast about Type Theory. Wild esoteric stuff.

alganet 21 hours ago

> it lacks universality

"there can be only one" vibes.

---

The benefits of notation are obvious. It's like good asphalt for thinking.

However, can you make a corner in the dirt road?

"I don't like dirt" is not an excuse, right? Can you do it?

"I run better on asphalt". No shit, Sherlock. Are you also afraid of problems with no clear notation established to guide you through?

gitroom a day ago

Man, I always try squishing code into tiny spaces too and then wonder why I'm tired after, but I kinda love those moments when it all just clicks.