> When you require massive concurrency and very small response times, SQLAlchemy simply requires too many resources. There is no way to avoid that.
For the use case of loading full objects, you can stretch it much further by using query caching, and hopefully PyPy will help a lot here as well. But you can shed most of the object-loading overhead by requesting individual columns back instead, then dropping to the SQL abstraction layer as the next level, and finally feeding your query into a raw cursor. These are optimizations you apply only in the spots where they're needed. You certainly don't have to ditch the automation of query rendering; the rendered form of a query can be cached too.
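A minimal sketch of that optimization ladder, using an in-memory SQLite database and SQLAlchemy 1.4+; the model and table names here are invented for illustration:

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="alice"), User(name="bob")])
    session.commit()

    # Level 1: full object loading -- maximum convenience,
    # maximum per-row overhead (identity map, object construction).
    users = session.query(User).all()

    # Level 2: request individual columns -- returns lightweight
    # row tuples, skipping mapped-object construction entirely.
    names = [name for (name,) in session.query(User.name)]

# Level 3: the Core expression language -- the SQL abstraction
# layer without any ORM involvement.
with engine.connect() as conn:
    rows = conn.execute(select(User.__table__.c.name)).all()

# Level 4: a raw DBAPI cursor -- no abstraction at all,
# reserved for the hottest spots.
cursor = engine.raw_connection().cursor()
cursor.execute("SELECT name FROM user")
raw = cursor.fetchall()
```

Each level trades away a layer of automation for speed, which is why it only makes sense to descend the ladder in the specific paths that profiling shows are hot.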
I also certainly agree that data access or service layers are a good thing, and perhaps you have the notion that "using an ORM" means "the ORM is your public data layer". I don't actually think that's the case. In my view you still want to make full use of automation in order to implement such a layer.
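A hypothetical sketch of that arrangement, assuming SQLAlchemy 1.4+: the ORM is an implementation detail behind a small repository, and callers never see mapped objects. The class and column names are invented here:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Account(Base):
    __tablename__ = "account"
    id = Column(Integer, primary_key=True)
    email = Column(String)

class AccountRepository:
    """Public data layer: exposes plain values, uses the ORM internally."""

    def __init__(self, engine):
        self._engine = engine

    def add(self, email):
        # The ORM handles INSERT rendering and primary-key retrieval.
        with Session(self._engine) as session:
            acct = Account(email=email)
            session.add(acct)
            session.commit()
            return acct.id

    def get_email(self, account_id):
        # The ORM handles SELECT rendering; the caller gets a string back.
        with Session(self._engine) as session:
            return session.get(Account, account_id).email

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
repo = AccountRepository(engine)
acct_id = repo.add("alice@example.com")
```

The point is that the repository's public surface stays ORM-free, while the query rendering and unit-of-work machinery still do the heavy lifting inside it.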