A program that uses more memory than is physically available will swap to disk. Swapping is orders of magnitude slower than accessing RAM and must be avoided. In some cases it will slow every program running on that computer to a crawl and make the machine appear dead.
You can tell how much memory your Stata job is taking with Stata's -memory- command, or you can estimate it as the number of observations times the width of each observation in bytes, plus some overhead. Floats are 4 bytes, doubles 8, strings as many bytes as characters, and so on. If you are using a medium or larger dataset, keep its approximate size in mind so that you never start your job on a machine with insufficient available memory. The "showload" Linux command will show the amount of available memory on each machine in the NBER cluster. "top" will show the memory your program and others are using. Try to leave some (a lot, actually) for other users.
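A sketch of both approaches, in Stata. The dataset sizes in the -display- line are made-up numbers for illustration:

```stata
* Report Stata's current memory use in detail
memory

* Show observations, variables, and dataset size at a glance
describe, short

* Back-of-the-envelope estimate for a hypothetical dataset:
* 10 million observations, each holding five floats and one double
display %15.0fc 10000000 * (5*4 + 8)   // about 280 million bytes
```

The hand estimate is useful before loading the data, when -memory- and -describe- cannot yet tell you anything.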
There are many ways to conserve memory in Stata.
Compress is the absolute minimum any medium or larger Stata program should do to save memory. Unless you have SSNs stored as doubles or otherwise have data that requires double precision, these two commands may save quite a bit of space with no additional bookkeeping of variables:
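The two commands referred to are presumably -compress- followed by re-saving the file; a minimal sketch, where the filename "mydata" is a placeholder:

```stata
* Demote each variable to the smallest storage type
* that still holds its values exactly (e.g. double -> float,
* long -> int or byte) -- no information is lost
compress

* Re-save so the smaller storage types persist on disk
save mydata, replace
```

-compress- never changes a variable's values, only its storage type, so it is safe to run routinely on any dataset.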
There is some information about memory usage in the Stata manual and in this post by Bill Gould.
Note that R has no floating-point type smaller than 8 bytes (numeric vectors are always doubles), so a very large problem that fits in memory as floats in Stata might not fit in R.