• frezik@midwest.social · 2 months ago

    No, that’s not what RISC is about. There were some early attempts to keep the number of instructions low (originally, ARM didn’t have a multiply instruction, and there are still plenty of microcontrollers you can buy that don’t have a divide instruction), but that idea was quickly abandoned because it just isn’t that useful. It only holds back instructions that optimize common cases. Your compiler can implement multiplication by doing addition in a loop, but that’s not very efficient.
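    To illustrate, here is a minimal C sketch of the shift-and-add lowering a compiler falls back on when there’s no hardware multiply (the function name is just for illustration; a real compiler emits the equivalent in assembly):

    ```c
    #include <stdint.h>

    /* What "multiplication by doing addition in a loop" looks like in
     * practice: shift-and-add, one add/shift pair per multiplier bit
     * instead of a single multiply instruction. */
    uint32_t soft_mul(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        while (b != 0) {
            if (b & 1)          /* lowest multiplier bit set? */
                result += a;    /* add the shifted multiplicand */
            a <<= 1;            /* shift multiplicand left */
            b >>= 1;            /* consume one multiplier bit */
        }
        return result;
    }
    ```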

    What really worked about it was keeping computation separate from memory access. You don’t have an ADD instruction that can fetch an operand from either a register or main memory. You have a MOV instruction that fetches from memory into a register, and you have an ADD instruction that works only on registers.

    ARM still does this just fine.
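    As a rough illustration of that split (the compiler output is paraphrased in the comments; exact registers and mnemonics vary):

    ```c
    /* On x86 (CISC) the memory read can be folded into the arithmetic:
     *     add  eax, DWORD PTR [rdi]   ; one instruction reads memory AND adds
     * On a load-store machine (ARM, RISC-V) it takes two separate steps:
     *     ldr  w8, [x0]               ; load: memory -> register
     *     add  w0, w8, w1             ; add: registers only
     */
    int add_from_mem(const int *p, int x)
    {
        return *p + x;
    }
    ```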

    • DumbAceDragon@sh.itjust.works · 2 months ago

      I’m a computer engineering major (still a student, tbf). I’m well aware of the difference between CISC and RISC; I was making a joke.

      Also, I understand your point, but you should know that a load-store architecture and a RISC instruction set are not the same thing. The vast majority of RISC ISAs are load-store, but not all load-store architectures are RISC.

      • frezik@midwest.social · 2 months ago

        http://www.quadibloc.com/arch/sriscint.htm

        The RISC architecture contains several common elements. Some of them are no longer present in most chips that still call themselves RISC:

        • All instructions execute in a single cycle.
        • Floating-point operations, specifically, are therefore excluded.

        But most of the defining characteristics of RISC do remain in force:

        • All instructions occupy the same amount of space in memory.
        • Only load, store, and jump instructions directly address memory. Calculations are performed only between operands in registers.

        https://groups.google.com/g/comp.arch/c/IZP5KUJprHw?pli=1

        MOST RISCs:
        3a) Have 1 size of instruction in an instruction stream
        3b) And that size is 4 bytes
        3c) Have a handful (1-4) of addressing modes (* it is VERY hard to count these things; will discuss later).
        3d) Have NO indirect addressing in any form (i.e., where you need one memory access to get the address of another operand in memory)
        4a) Have NO operations that combine load/store with arithmetic, i.e., like add from memory, or add to memory. (note: this means especially avoiding operations that use the value of a load as input to an ALU operation, especially when that operation can cause an exception. Loads/stores with address modification can often be OK as they don’t have some of the bad effects)
        4b) Have no more than 1 memory-addressed operand per instruction
        5a) Do NOT support arbitrary alignment of data for loads/stores
        5b) Use an MMU for a data address no more than once per instruction
        6a) Have >=5 bits per integer register specifier
        6b) Have >= 4 bits per FP register specifier

        Note that none of this has to do with reducing the number of instructions, which is what people tend to think of when they hear the name.

        • barsoap@lemm.ee · 2 months ago

          “All instructions occupy the same amount of space in memory.”

          Both ARM and RISC-V have compressed instructions. Dunno how ARM works, but with RISC-V the 16-bit instructions can be freely interspersed with the 32-bit ones, whose alignment requirement also drops to 16 bits. That gets you something like 95% of the space reduction you’d get from fully variable-width instructions, without overcomplicating the insn decoder.
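          The lengths are easy to tell apart from the low bits of the first 16-bit parcel, which is part of why mixing them is cheap. A sketch of that check (16- and 32-bit cases only; longer encodings exist in the spec but aren’t handled here):

          ```c
          #include <stdint.h>

          /* RISC-V instruction length from the first 16-bit parcel:
           *   bits [1:0] != 0b11                  -> 16-bit compressed (RVC)
           *   bits [1:0] == 0b11, [4:2] != 0b111  -> 32-bit instruction */
          static int insn_length_bytes(uint16_t first_parcel)
          {
              if ((first_parcel & 0x03) != 0x03)
                  return 2;   /* compressed */
              if ((first_parcel & 0x1c) != 0x1c)
                  return 4;   /* standard 32-bit encoding */
              return 0;       /* longer/reserved encodings: not handled */
          }
          ```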

          As to addressing and combining loads with arithmetic: no such instructions, but all but the tiniest CPUs are expected to do macro-op fusion for things like indexed loads. Here’s an overview.
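          For instance, an indexed access like the one below lowers to separate address arithmetic plus a plain load on RV64, since the base ISA has no reg+reg addressing mode; a core that fuses the pair executes it as one internal op. The instruction sequence in the comment is paraphrased, and whether a given core actually fuses it is microarchitecture-specific:

          ```c
          /* a[i] on RV64 (without the Zba extension) lowers to roughly:
           *     slli t0, a1, 2      ; scale the index
           *     add  t0, a0, t0     ; compute the address in a register
           *     lw   a0, 0(t0)      ; plain load, offset(reg) addressing only
           * Macro-op fusion lets hardware treat the add+lw pair as one
           * internal "indexed load" without adding an ISA addressing mode. */
          int load_indexed(const int *a, long i)
          {
              return a[i];
          }
          ```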

          The MMU thing… well, the vector extension can do gather/scatter. I guess that could stay within the letter of “use the MMU once per instruction”, but definitely not the spirit.