To the right is a randomly generated curve (it changes periodically) with the region below it shaded. We are interested in estimating the area under this curve. In a calculus course, students learn that the area under a curve can be found by integrating the function that represents the curve over a closed interval. The Monte Carlo method is a probabilistic method for estimating, among other things, the area under a given curve. The particular method used in the periodically updating images is called the "Hit-or-Miss" method.
In the "Hit-or-Miss" method, we uniformly, randomly sample a region that encloses the area we want to estimate. The red dots represent randomly sampled points outside of the area of interest (misses). The blue dots represent randomly sampled points inside the area of interest (hits). There are a total of 5000 dots. As such, an estimate for the shaded region is $$\frac{\# \mbox{ of blue dots}}{5000}$$ In the image on the right, we have that the shaded region covers approximately of the entire bounding region. Thus, if we knew the area of the bounding region, then we would also know the area (in estimate) of the shaded region (we just scale).
For the motivated student: how do we know how good our estimate is? [Hint: inference for a proportion.] What happens when we increase the number of sample points by a factor of 100?
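One way to approach the hint, sketched here under the usual large-sample (normal approximation) assumptions: treat the hit fraction as a sample proportion, $$\hat{p} = \frac{\# \mbox{ of blue dots}}{n}, \qquad \mathrm{SE}(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}},$$ which gives an approximate 95% confidence interval $\hat{p} \pm 1.96\,\mathrm{SE}(\hat{p})$ for the true fraction (and, after scaling by the bounding region's area, for the true area). Since the standard error shrinks like $1/\sqrt{n}$, increasing the number of sample points by a factor of 100 narrows the interval by a factor of $\sqrt{100} = 10$.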