IMO the major remaining factor is probably compression time. I think some of the other factors could be viewed as two sides of the same coin: compression method and file type/data characteristics, for example, or compression method/algorithm and archive format/utility.
Once the relationships between these are established (in terms of how much compression can be achieved), the main limitation becomes a trade-off between the ultimate compression factor and the time that can feasibly be devoted to the process. We could imagine the maximum possible compression being achieved by a utility with numerous algorithms, each optimized for a different data type, which analysed each file to select the ideal algorithm, or even multiple algorithms for different sections of some files. That would be a time-consuming process, and how do you work out whether 5% more compression is worth taking 50% longer? Where does the law of diminishing returns really set in?
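As a rough illustration of that per-file selection idea, here's a minimal Python sketch (the file name example.bin is just a placeholder, and the fixed settings are arbitrary) that compresses the same data with zlib, bz2, and lzma from the standard library, recording the output size and the time each attempt took:

```python
import bz2
import lzma
import time
import zlib

# Hypothetical per-file codec selection: try several general-purpose
# algorithms and keep whichever produces the smallest output, while
# also recording how long each attempt took.
CODECS = {
    "zlib": lambda data: zlib.compress(data, 9),
    "bz2":  lambda data: bz2.compress(data, 9),
    "lzma": lambda data: lzma.compress(data, preset=9),
}

def best_codec(data: bytes):
    results = []
    for name, compress in CODECS.items():
        start = time.perf_counter()
        compressed = compress(data)
        elapsed = time.perf_counter() - start
        results.append((name, len(compressed), elapsed))
    # Smallest output wins, regardless of how long it took.
    return min(results, key=lambda r: r[1]), results

if __name__ == "__main__":
    with open("example.bin", "rb") as f:   # any sample file
        payload = f.read()
    (winner, size, secs), all_results = best_codec(payload)
    print(f"original: {len(payload)} bytes")
    for name, sz, t in all_results:
        print(f"{name:5s} -> {sz} bytes in {t:.3f}s")
    print(f"best: {winner} ({size} bytes, {secs:.3f}s)")
```

Even this toy version makes the cost visible: every extra algorithm you try multiplies the work, whether or not it ends up winning.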
Most utilities already approximate this trade-off by letting you select a compression level, with rough estimates of the size reduction and time required.
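A quick sketch of what that strength setting actually trades off, again using only the standard library and a placeholder file name: sweeping zlib's levels on the same data shows how the size/time curve flattens towards the top levels.

```python
import time
import zlib

# Sweep zlib's compression levels to see how little extra compression
# the highest levels typically buy relative to the extra time they cost.
def sweep_levels(data: bytes):
    for level in range(1, 10):
        start = time.perf_counter()
        size = len(zlib.compress(data, level))
        elapsed = time.perf_counter() - start
        print(f"level {level}: {size} bytes, {elapsed:.3f}s")

if __name__ == "__main__":
    with open("example.bin", "rb") as f:   # any sample file
        sweep_levels(f.read())
```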