DLively, not sure if you are serious or not, but this isn't how computers work.
Even very simplified grammars like that of C (simplified compared to English) can't be run on a CPU directly; they're for human consumption only. You need a compiler to compile it down to instructions that the CPU can actually understand, and those have little to no resemblance to what you would write in a high level language like C.
A CPU provides a so-called instruction set, the set of instructions it is capable of executing. You will almost always find instructions to read bytes from memory into CPU registers, store CPU registers back into memory, do arithmetic on register contents, and branch instructions to alter the flow of execution. The instruction set you are (unknowingly) working with is called x86, and it's THE desktop instruction set. It is old as fuck, grew over the years into an absolute beast and is supported by Intel and AMD. On mobile and embedded devices you will most commonly find ARM CPUs, which provide the ARM instruction set, completely different from the x86 one. Of course, newer generations of AMD and Intel CPUs also support x86-64, the 64bit instruction set that is backwards compatible with x86 but adds 64bit support and a slew of other instructions (and deprecations, when the CPU is run in 64bit mode).
x86 is what is commonly called a CISC instruction set. CISC stands for "complex instruction set computer", and it's an idea from the 80's which basically translates to: let's add as many highly specialized instructions as possible to the CPU. As a result, x86 CPUs have support for hardware random number generation, AES encryption and decryption and a shit ton of other stuff. But it's still very low level and a far cry from anything high level such as C and its standard library.
Let's assume a function that operates on two vectors and copies the x/y/z components of one into the other. The compiler will generate machine code that first loads the two vector pointers into two registers, and then code that moves 4 bytes at a time, three times, from one address to the other.
In assembler, this looks like this (comments mine, the code was generated by the Clang compiler):
mov 0x11e004, %eax // Load the pointer to the source vector (stored at address 0x11e004) into the EAX register
mov (%eax), %eax // Load the 4 bytes at the address found in the EAX register from RAM into the EAX register
mov 0x11e000, %ecx // Load the pointer to the destination vector (stored at address 0x11e000) into the ECX register
mov %eax, (%ecx) // Store the contents of the EAX register into the address that the ECX register points to (aka transfer into memory)
// Same deal, but the memory accesses are offset by 4 bytes (this is the y component)
mov 0x11e004, %eax
mov 0x4(%eax), %eax
mov 0x11e000, %ecx
mov %eax, 0x4(%ecx)
// And again, this time with an offset of 8 bytes (this is the z component)
mov 0x11e004, %eax
mov 0x8(%eax), %eax
mov 0x11e000, %ecx
mov %eax, 0x8(%ecx)
ret // Return to the caller
You may or may not have noticed that despite x86 being a CISC instruction set, the general-purpose mov instruction can't move data directly from memory to memory: it never takes two memory operands. You first have to load the data into a CPU register and then store it.
And no, this is NOT what the CPU sees. Assembly, again, is for human consumption only. It is closer to what the CPU will eventually see, but it's still not quite there. It does lack a lot of the high level niceties of C though. In C, the very same thing looks like this (in fact, this is what I threw at the compiler to get the assembly from earlier):
// Elsewhere in the file: struct vec { int x, y, z; }; and two global
// pointers, struct vec *vec1, *vec2;
void test()
{
    // vec1 is 0x11e000 and vec2 is 0x11e004
    vec1->x = vec2->x;
    vec1->y = vec2->y;
    vec1->z = vec2->z;
}
Once you have assembly code, you throw it at an assembler, which finally generates the actual machine code out of it. What the CPU will eventually see is the following (same order as the assembly code above; each line is one instruction):
a1 04 e0 11 00
8b 00
8b 0d 00 e0 11 00
89 01
a1 04 e0 11 00
8b 40 04
8b 0d 00 e0 11 00
89 41 04
a1 04 e0 11 00
8b 40 08
8b 0d 00 e0 11 00
89 41 08
c3
Except it's still not exactly what the CPU sees, because this is, again, for human consumption. It IS what the CPU sees insofar as each hex number represents the value of one byte, but the CPU doesn't see it as text, it consumes the raw bytes.
And here is where things start to get immensely complex. Modern CPUs are absolute beasts at what they do. They are beyond fucked up, and the things that are done to allow as many instructions as possible to retire as fast as possible are insane. You could fill books with one CPU generation alone. Things have come a long way since the first steps in microprocessors, and the worst thing that got in the way was the laws of physics. I'll spare you that for now, mostly because it would take at least five more paragraphs explaining a couple more things at a high level before even considering an actual CPU.
For completeness' sake though (you can skip everything from here), ARM has what is commonly known as a RISC instruction set, where the R stands for "reduced". It has a couple of very general instructions that you have to combine to get the specialized behaviour that you might have gotten out of a single CISC instruction. RISC instruction sets are usually easier for the CPU to execute because instructions have a fixed length (note how above, the instructions vary from 1 to 6 bytes), so things like instruction fetching can be done faster. There's also this whole micro-ops thing, which I'm not getting into today.
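To make the contrast concrete, here is a rough MIPS-style sketch of just the x-component copy from above (the register names are real MIPS, the src_ptr/dst_ptr labels are my invention). Every one of these instructions encodes to exactly 4 bytes:

```
lw   $t0, src_ptr      # load the pointer to the source vector
lw   $t1, 0($t0)       # load the 4-byte x component
lw   $t2, dst_ptr      # load the pointer to the destination vector
sw   $t1, 0($t2)       # store the x component
```

Same load-then-store dance as on x86, but with a uniform instruction width the fetch and decode stages never have to guess where the next instruction starts.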
Questions? I have barely covered anything and left out a huge deal of information, so if anything seems incoherent, ask ahead.
Edit: Google keywords that might be interesting if you want some deeper knowledge of how CPUs work and why they are the way they are (this is an incredibly deep rabbit hole. Beware):
- Pipelined CPU design
- Superscalar CPUs
- CISC
- RISC
- x86 instruction set
- MIPS instruction set
- Out of order execution
- Register renaming
- Signal propagation
- Contamination delay