ORMs can be slow. However, they are also very useful and central to Frappe's entire metadata- and document-centric model.
Frappe's ORM can trigger validations and all kinds of side effects via document hooks. Like it or not, these hooks are how one writes Frappe code, so the ORM is critical on the write path, even if it is (far) less efficient than raw SQL writes. We have previously explored this topic for bulk writes.
Some of our apps have read-heavy APIs where many documents are read during a single API request. Until recently we had no choice but to call frappe.get_doc and accept the performance penalty.
However, I recently discovered frappe.get_cached_doc, and it's essentially a free performance upgrade. The documentation is pretty straightforward: get_cached_doc is equivalent to the return value of get_doc, served from the cache whenever possible.
Note, however, that direct database writes such as doc.db_set do not update the cache. This does mean our bulk_insert method does not update the cache either. Thankfully, there's frappe.clear_document_cache for invalidating stale entries by hand.
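To see why stale reads are the real hazard here, consider a Frappe-free sketch, using a plain dict as a stand-in for the document cache: a read-through getter populates the cache, a direct write bypasses it (as db_set does), and readers keep seeing the old value until the entry is explicitly invalidated, which is the role frappe.clear_document_cache plays. All names below are illustrative, not Frappe APIs.

```python
# Stand-in for the document cache: reads populate it, while direct
# writes to the "database" bypass it entirely.
database = {"TODO-0001": {"status": "Open"}}
cache = {}

def get_cached_doc(name):
    # Read-through: serve from cache, falling back to the database.
    if name not in cache:
        cache[name] = dict(database[name])
    return cache[name]

def db_set(name, field, value):
    # Direct write: mutates the database but NOT the cache.
    database[name][field] = value

assert get_cached_doc("TODO-0001")["status"] == "Open"
db_set("TODO-0001", "status", "Closed")
# Stale read: the cache never saw the direct write.
assert get_cached_doc("TODO-0001")["status"] == "Open"
# Explicit invalidation restores consistency.
cache.pop("TODO-0001", None)
assert get_cached_doc("TODO-0001")["status"] == "Closed"
```

The same reasoning explains why a cache-unaware bulk_insert leaves readers of get_cached_doc behind until the affected entries are cleared.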
What does this mean in terms of performance? A cool 10000x+ increase in read throughput. See the code below for my simplistic benchmark.
```python
from timeit import timeit

import frappe

doctype, docname = "ToDo", "TODO-0001"  # any existing document works

def read_doc():
    return frappe.get_doc(doctype, docname)

def read_cached_doc():
    return frappe.get_cached_doc(doctype, docname)

print(timeit(read_doc, number=1000))
print(timeit(read_cached_doc, number=1000))
```
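For readers without a Frappe bench handy, the shape of this gain can be reproduced with nothing but the standard library. The sketch below memoizes a hypothetical read function with functools.lru_cache, using time.sleep to simulate a ~10 ms database round-trip; the function name and timings are illustrative, not part of Frappe.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def read_doc_cached(docname):
    # Simulate a database round-trip of roughly 10 ms.
    time.sleep(0.01)
    return {"name": docname, "status": "Open"}

t0 = time.perf_counter()
read_doc_cached("TODO-0001")  # cold: hits the "database"
cold = time.perf_counter() - t0

t0 = time.perf_counter()
read_doc_cached("TODO-0001")  # warm: served from memory
warm = time.perf_counter() - t0

print(f"cold={cold:.4f}s warm={warm:.6f}s")
```

The warm read skips the round-trip entirely, which is where multi-order-of-magnitude throughput improvements come from: the second call costs a dict lookup instead of I/O.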
Sure, caching documents needs more RAM. But RAM is dirt cheap these days, so go ahead and replace frappe.get_doc with frappe.get_cached_doc in your performance-critical paths and enjoy an immediate boost to API responsiveness.