Let me try to answer your questions to the best of my ability.
1. Yes, it is a large system, because everything is trained in a supervised fashion. Large organisms don't tend to do well in nature because their replication capacity is far inferior to that of small animals or insects; it's not that small creatures are more intelligent and that's why they survived. Replication for machines and for animals are entirely different things. A system like this is only expensive to train initially, not during inference. Today's quantum computers are not of much use for training these kinds of models.
2. The Google AI research team has full access, and it won't be made available to the public anytime soon. The cost is unknown. These systems can make predictions in a few seconds or even faster.
3. There are physics engines in which we can simulate many real environments. One simulation mentioned in the paper was a robot trying to pick up objects and arrange them in a particular way, with the usual physics rules applied in the simulated environment (a rough simulation sketch is included after this list).
4. Since it has seen a tremendous amount of data, the model supposedly generalizes well, so it doesn't need to predict many iterations into the future; one prediction at a time is sufficient (see the second sketch below). Its goal is not defined by businesses; as the model evolves, people will find new use cases and adapt it to their needs.
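
To make point 3 a bit more concrete, here is a minimal sketch of the kind of simulated setup I mean, using the open-source PyBullet engine. This is just my own illustration: the paper's authors may well use a different engine, and the object files here are simply examples bundled with PyBullet's data package.

```python
# Minimal PyBullet sketch: drop a small cube into a physics-simulated world
# and step the simulation, the same way pick-and-place environments are
# typically set up. Illustrative only; not the setup from the paper.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                    # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

plane = p.loadURDF("plane.urdf")                       # ground plane
cube = p.loadURDF("cube_small.urdf", [0.5, 0.0, 0.5])  # object to manipulate

for _ in range(240):                                   # ~1 second at 240 Hz
    p.stepSimulation()

pos, orn = p.getBasePositionAndOrientation(cube)
print("cube settled at", pos)
p.disconnect()
```

A real pick-and-place task would also load a robot arm (e.g. the `kuka_iiwa/model.urdf` that ships with the same data package) and command its joints, but the structure is the same: build the scene, step the physics, read back object poses.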
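
And for point 4, a toy illustration of what "one prediction at a time" buys you: the agent never rolls the model out many steps into the future; it scores candidate actions by their predicted next state, acts, observes, and repeats. The dynamics function below is a made-up stand-in for a learned model, not anything from the paper.

```python
# Toy sketch of closed-loop control with a one-step predictor:
# predict one step ahead per candidate action, pick the best, execute, re-plan.
import numpy as np

def predict_next_state(state, action):
    # Stand-in for the learned model: a trivial linear dynamics guess.
    return state + 0.1 * action

def goal_distance(state, goal):
    return float(np.linalg.norm(state - goal))

state = np.array([0.0, 0.0])
goal = np.array([1.0, 1.0])
candidate_actions = [np.array(a) for a in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])]

for step in range(20):
    # One prediction per candidate action -- no multi-step rollout needed.
    best = min(candidate_actions,
               key=lambda a: goal_distance(predict_next_state(state, a), goal))
    state = predict_next_state(state, best)  # "execute" the chosen action
    if goal_distance(state, goal) < 0.05:
        break

print(step, state)
```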
Hope I answered some of your queries.