Imagine that you saw frequent (but not constant) spikes in your application usage. Because the server that handles your application's requests is not serverless, it must be upgraded (at a cost to you or your company) to handle the additional load. In times of low usage, the server does not consume fewer resources: you upgraded it to handle a specific peak load of users, so it always runs at that level of performance, and performance comes at a cost.
With serverless computing, resources are automatically scaled up and down as demand increases and decreases. This is a much more efficient way of using a server, because you are not paying for underutilized computing resources.
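The cost difference described above can be sketched with a quick back-of-the-envelope comparison. This is a minimal illustration with entirely hypothetical prices (`FIXED_SERVER_COST_PER_HOUR` and `SERVERLESS_COST_PER_REQUEST` are made-up figures, not real cloud pricing): a provisioned server bills for every hour it runs, while a serverless model bills per request, so its cost tracks actual demand.

```python
# Hypothetical prices for illustration only -- real cloud pricing varies.
FIXED_SERVER_COST_PER_HOUR = 0.50      # upgraded server, billed busy or idle
SERVERLESS_COST_PER_REQUEST = 0.00002  # pay only per invocation

def monthly_fixed_cost(hours: float = 730) -> float:
    """A provisioned server bills for every hour, regardless of load."""
    return FIXED_SERVER_COST_PER_HOUR * hours

def monthly_serverless_cost(requests: int) -> float:
    """Serverless bills per request, so cost rises and falls with demand."""
    return SERVERLESS_COST_PER_REQUEST * requests

print(f"Fixed server:       ${monthly_fixed_cost():.2f}")
print(f"Serverless (quiet): ${monthly_serverless_cost(1_000_000):.2f}")
print(f"Serverless (spiky): ${monthly_serverless_cost(20_000_000):.2f}")
```

In a quiet month the serverless bill stays small, while the fixed server costs the same whether or not anyone uses the application; only when traffic spikes does the serverless bill climb toward (or past) the fixed cost.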