In a language like C or Java, when we allocate an array we have to specify the number of slots ahead of time. There's a reason for this. Recall that arrays get their O(1)-time-lookup superpower from the fact that they're contiguous (no gaps) in memory. So when we allocate an array, we have to ask the operating system to "reserve" a bunch of sequential buckets. If we didn't say how many buckets, the operating system wouldn't know how many spots to reserve.
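To make that concrete, here's a minimal Python sketch using the standard ctypes module, which exposes C-style fixed-size arrays (the `FixedIntArray` name is just an illustrative choice):

```python
import ctypes

# A C-style array type: exactly 100 contiguous int slots,
# all reserved at once. The size is baked into the type.
FixedIntArray = ctypes.c_int * 100

scores = FixedIntArray()  # allocate the whole block up front
scores[0] = 98            # O(1): address = start + index * slot size
scores[99] = 42           # the last slot we reserved

# scores[100] = 7         # IndexError -- we never reserved slot 100
print(scores[0], scores[99])
```

The out-of-bounds line is commented out on purpose: slot 100 was never reserved, so the array simply refuses to grow past its 100 slots.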
But there are lots of cases where having to specify the size of an array ahead of time is tough. What if we wanted an array to hold email addresses of people who have signed up to learn about the private beta for our hip new app (Tinder for cake recipes)? We might get a million signups, or our moms might be the only ones who check it out. So how big should we make our array?
To avoid specifying the size of our array ahead of time, we could just kinda guess how big we want our array and resize it as needed. This is called a dynamic array.
So with a dynamic array, we just say "I want an array," and our environment shrugs and says, "100 slots is probably good, right?" And it makes you your array:
You can start "appending" items to your array, and your environment will internally keep track of the "current_index" so it knows where to put stuff you're appending:
But at some point you'll be at index 99 and you'll want to append but your array won't be big enough!
(diagram. array is sweatin')
So you'll say ".append()" and to make it work, your dynamic array will do a few things under the hood:
1. Make a new, bigger array (usually twice as big).
2. Copy each element from the old array into the new array.
3. "Free up" the old array (tell the operating system that part of memory is available for other use).
4. Actually append your item.
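The steps above can be sketched in Python. This is illustrative only — Python's built-in list is already a dynamic array (and CPython actually grows it by a smaller factor than 2) — and the class name and starting capacity are made up for the example:

```python
import ctypes

class DynamicArray:
    """A sketch of a dynamic array built on fixed-size backing arrays."""

    def __init__(self, capacity=10):
        self._size = 0                    # how many items we've appended
        self._capacity = capacity         # how many slots we've reserved
        self._array = self._make_array(capacity)

    def _make_array(self, capacity):
        # Reserve `capacity` contiguous object slots (like malloc in C).
        return (ctypes.py_object * capacity)()

    def append(self, item):
        if self._size == self._capacity:
            self._resize(2 * self._capacity)  # the "doubling" append: O(n)
        self._array[self._size] = item        # the ordinary append: O(1)
        self._size += 1

    def _resize(self, new_capacity):
        # 1. Make a new, bigger array.
        new_array = self._make_array(new_capacity)
        # 2. Copy each element from the old array into the new one.
        for i in range(self._size):
            new_array[i] = self._array[i]
        # 3. "Free up" the old array (here, Python's garbage collector
        #    reclaims it once nothing references it).
        self._array = new_array
        self._capacity = new_capacity
        # 4. ("Actually append" happens back in append(), after resizing.)

    def __len__(self):
        return self._size

    def __getitem__(self, index):
        if not 0 <= index < self._size:
            raise IndexError("index out of range")
        return self._array[index]
```

Starting from a capacity of 2, the third append triggers a resize to 4 slots and the fifth triggers a resize to 8 — exactly the doubling pattern described above.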
Notice that we're not just extending the size of our current array (the next 100 contiguous memory slots might not be free!), we're allocating a whole new array. That's why we also have to copy each item from the old array to the new one.
We could call these special appends that require steps 1-3 above "doubling" appends, since they require us to make a new array that's (usually) double the size of the old one. Appending an item to an array is usually an O(1)-time operation, but a single "doubling" append is an O(n)-time operation, since we have to copy all n items from our array!
Does that mean an append operation on a dynamic array has a worst-case time cost of O(n)? Yes. So if we spin up an empty dynamic array and append n items, does that have some crazy time cost like O(n^2) or O(n!)?!?! Not quite.
While the cost of each special O(n)-time "doubling" append doubles each time, the number of O(1)-time appends you get before the next doubling doubles as well, at the same rate.
This "cancels out," and we can say each append has an "average" or "amortized" time cost of O(1).
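We can sanity-check that "cancels out" claim with some quick arithmetic. Starting from a capacity of 1, the doublings during n appends copy 1 + 2 + 4 + ... elements in total, and that sum always stays below 2n. A small counting sketch (the function name is made up for the example):

```python
def total_copies(n, initial_capacity=1):
    """Count the element copies caused by doubling resizes
    while appending n items to an (imaginary) dynamic array."""
    size, capacity, copies = 0, initial_capacity, 0
    for _ in range(n):
        if size == capacity:   # the "doubling" append
            copies += size     # copy every existing element over
            capacity *= 2
        size += 1              # the ordinary O(1) part of the append
    return copies

# 1,000 appends trigger doublings at sizes 1, 2, 4, ..., 512:
# 1 + 2 + 4 + ... + 512 = 1,023 copies -- about 1 copy per append.
print(total_copies(1000))
```

However large n gets, the total copy work stays under 2n, so n appends do O(n) work overall: O(1) amortized per append.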
Given this, in industry we usually wave our hands and say dynamic arrays have a time cost of O(1) for appends, even though strictly speaking that's only true for the "average case" or the "amortized cost" (the worst-case cost for any given append is O(n)).
In an interview, if we were worried about that O(n)-time worst-case cost of appends, we might try to use a normal, non-dynamic array. The advantage of dynamic arrays over arrays is that you don't have to specify the size ahead of time, but the drawback is that some appends can be more expensive.
But what if we wanted the best of both worlds:
worst-case O(1)-time appends, and
dynamic sizing?
Data structures are full of tradeoffs. Dynamic arrays give us dynamic sizing, but at the cost of worst-case append speed. Our next data structure will give us dynamic sizing without incurring that cost, but it'll incur a different cost. Choosing the best data structure depends entirely on what you'll be doing with it.