There are a couple of ways of adding resources in the Cloud
Cloud Computing, when exercised in all of its glory, is about Services rather than Servers - this is often facilitated by some form of task scheduling such as Message Queuing (click here to read my introduction to Amazon SQS and how to best leverage it for Cloud Computing).
In this paradigm, adding resources to the Service (again, individual Servers are irrelevant) is done simply by adding more servers to our Service's pool, rather than adding resources to any server in particular.
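As a rough sketch of what "adding servers to the pool" can look like in practice (this assumes the boto3 SDK, a placeholder AMI ID, a placeholder instance type, and a hypothetical "worker-pool" tag - none of which come from a specific deployment):

```python
# Sketch only: scale out a stateless worker pool by launching more
# instances of the same image. AMI ID, instance type and tag values
# below are placeholders, not real identifiers.
import boto3

ec2 = boto3.resource("ec2")

def add_workers(count):
    # Launch `count` additional workers; the Service doesn't care
    # which servers they are, only that the pool grows.
    return ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.small",           # placeholder type
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "worker-pool"}],
        }],
    )

add_workers(2)  # two more servers join the pool
```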
But not every deployment is a textbook example of Cloud Computing - and that’s fine. Sometimes we have to deal with a component that simply can’t be distributed across servers (often the case with database servers), and sometimes we are running workloads so light that it makes more sense to throw more resources at our single server than to face the complexity of re-architecting our application to scale across multiple servers.
Not all Instances were created (launched) equal
At its essence, an Instance is an Amazon Machine Image (AMI) that has been deployed to a (virtual) machine and powered up - hence the term Instance. It is a (running) Instance of an (Amazon Machine) Image. Going back to the machine part of the equation, AWS offers different Instance types (read: sizes), each with a predefined amount of CPU, RAM and I/O (priority) resources. Note that Instance types are either 32 or 64 bit - we’ll get back to this important detail in just a bit.
When we provision a new Instance, we either explicitly specify a type (read: size), or just default to small, which comes with 1.7 GB of memory and 1 EC2 Compute Unit.
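To make that concrete, here is a minimal, hedged sketch of specifying the type explicitly at launch time (again assuming boto3 and a placeholder AMI ID; if the type is omitted, the RunInstances API has historically fallen back to the small type described above):

```python
# Sketch: explicitly choosing an instance type at launch time.
# The AMI ID is a placeholder. If InstanceType were omitted, EC2 has
# historically defaulted to the small type: 1.7 GB of memory and
# 1 EC2 Compute Unit.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m1.large",          # explicit type instead of the default
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceType"])
```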