While a programmer can see far more of what the machine is doing than an ordinary user can, many aspects of its operation still remain invisible to them, such as the clock frequency and the physical memory size. Computer organization, also known as computer architecture, examines the structural relationships between the components of a computer system. Familiar examples include how the CPU accesses and uses memory, as well as cluster computing and non-uniform memory access (NUMA).
The goal of all this is to design ever-improving computer architectures that run more and more efficiently, maximizing performance while keeping power consumption and cost as low as possible. To optimize their software, programmers must understand the processing capabilities of their CPUs, which requires a detailed analysis of the computer's organization. Computer architects must keep many considerations in mind to achieve this. Perhaps the most important, in terms of maximizing performance, are a computer's logic design, functional organization, and instruction set design. The choice of processor for a programming project depends heavily on how these components are structured: multimedia applications, for example, need rapid data access, while some supervisory software instead prioritizes fast interrupt handling. Architects need to be clear about what is available on each new version of a particular machine.
But how can one use a machine to design a better version of that machine? In the early stages, most of the desired behavior of the new computer is implemented as software on an existing machine: a simulator that can run programs using the designer's new approach to handling instructions. The architect then collaborates with compiler designers to work out the kinks in those instructions. The result is an instruction-handling process that can then be implemented in hardware as a new version of the original machine.
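The simulation step described above can be sketched in miniature. The toy instruction set below (the opcode names, register count, and semantics are all illustrative assumptions, not any real machine's ISA) shows the essential idea: an existing machine runs a program that interprets the proposed instructions, so programs for the new design can be tested before any hardware exists.

```python
# Minimal sketch of an instruction-set simulator for a hypothetical
# register-based ISA with four instructions. All opcode names and
# semantics are illustrative assumptions, not a real machine's ISA.

def simulate(program, num_regs=4):
    """Run `program` (a list of instruction tuples) and return the registers."""
    regs = [0] * num_regs
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOADI":            # LOADI rd, imm   -> rd = imm
            rd, imm = args
            regs[rd] = imm
        elif op == "ADD":            # ADD rd, rs1, rs2 -> rd = rs1 + rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "JNZ":            # JNZ rs, target  -> branch if rs != 0
            rs, target = args
            if regs[rs] != 0:
                pc = target
                continue
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode: {op}")
        pc += 1
    return regs

# Example program: sum the integers 5 down to 1.
# r0 = running sum, r1 = loop counter, r2 = decrement constant.
program = [
    ("LOADI", 0, 0),    # r0 = 0
    ("LOADI", 1, 5),    # r1 = 5
    ("LOADI", 2, -1),   # r2 = -1
    ("ADD", 0, 0, 1),   # r0 += r1
    ("ADD", 1, 1, 2),   # r1 -= 1
    ("JNZ", 1, 3),      # loop back while r1 != 0
    ("HALT",),
]

print(simulate(program)[0])  # prints 15
```

A designer experimenting with, say, a new branch instruction would add a case to the interpreter and rerun existing test programs, which is far cheaper than fabricating hardware for each iteration.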