Performance ideas
  • persistent/shared preamble cache, so preambles can be reused across instances and restarts (first sketch below)
  • parallelize IO and parsing by scanning ahead for #includes; perhaps parallelize the IO itself on VFSes that support it? (second sketch below)
  • module support/inference
  • cache the Sema code completion result set rather than reparsing on each keystroke, and replay the index query etc. (third sketch below)
  • improve allocation/memory usage: https://reviews.llvm.org/D93452
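
The sketches below are illustrative only; the names, structure, and hashing/serialization details are assumptions, not clangd's actual design. First, a persistent/shared preamble cache could key serialized preambles on a hash of everything the preamble depends on (compile flags, the preamble region of the main file, the headers it pulls in), so that another instance, or the same one after a restart, can reuse the build:

```cpp
// Hypothetical sketch of a disk-backed preamble cache shared across
// clangd instances; not the actual clangd implementation.
#include <cstdint>
#include <fstream>
#include <functional>
#include <optional>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Everything the preamble build depends on; any change invalidates the entry.
struct PreambleInputs {
  std::string CompileCommand;               // flags that affect the preamble
  std::string PreambleRegion;               // the leading #include block of the file
  std::vector<std::string> HeaderContents;  // contents of headers it pulls in
};

static uint64_t hashInputs(const PreambleInputs &In) {
  std::hash<std::string> H;
  uint64_t Seed = H(In.CompileCommand);
  auto Mix = [&](const std::string &S) {
    Seed ^= H(S) + 0x9e3779b97f4a7c15ULL + (Seed << 6) + (Seed >> 2);
  };
  Mix(In.PreambleRegion);
  for (const std::string &Hdr : In.HeaderContents)
    Mix(Hdr);
  return Seed;
}

class SharedPreambleCache {
  std::string Dir; // e.g. a per-user cache directory

  std::string pathFor(uint64_t Key) const {
    std::ostringstream OS;
    OS << Dir << "/" << std::hex << Key << ".pch";
    return OS.str();
  }

public:
  explicit SharedPreambleCache(std::string CacheDir) : Dir(std::move(CacheDir)) {}

  // Returns the path of a reusable serialized preamble, if any instance built one.
  std::optional<std::string> lookup(const PreambleInputs &In) const {
    std::string Path = pathFor(hashInputs(In));
    if (std::ifstream(Path))
      return Path;
    return std::nullopt;
  }

  // Publishes a freshly built preamble so other instances (or later runs) can reuse it.
  void store(const PreambleInputs &In, const std::string &Serialized) const {
    std::ofstream(pathFor(hashInputs(In)), std::ios::binary) << Serialized;
  }
};
```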
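
Second, include-scanning ahead: a cheap lexical pass can discover #include targets before the parser reaches them, so their contents can be fetched on background threads and IO overlaps with parsing. The scanner and path handling below are deliberately naive placeholders (a real one must honor header search paths, comments, and macros):

```cpp
// Hypothetical sketch of prefetching headers found by a quick #include scan,
// so file IO runs in parallel with parsing instead of inside it.
#include <fstream>
#include <future>
#include <regex>
#include <sstream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Naive scanner: a real one must handle comments, macros, and search paths.
static std::vector<std::string> scanIncludes(const std::string &Source) {
  static const std::regex IncludeRE(R"(^\s*#\s*include\s*[<"]([^>"]+)[>"])");
  std::vector<std::string> Headers;
  std::istringstream Lines(Source);
  std::string Line;
  while (std::getline(Lines, Line)) {
    std::smatch Match;
    if (std::regex_search(Line, Match, IncludeRE))
      Headers.push_back(Match[1]);
  }
  return Headers;
}

static std::string readFile(const std::string &Path) {
  std::ifstream In(Path, std::ios::binary);
  std::ostringstream OS;
  OS << In.rdbuf();
  return OS.str();
}

// Kick off all reads up front, then hand the parser a map of prefetched
// contents. High-latency VFS backends benefit most, since every request is
// already in flight before the parser first asks for a header.
std::unordered_map<std::string, std::string>
prefetchIncludes(const std::string &MainFileSource) {
  std::vector<std::pair<std::string, std::future<std::string>>> Pending;
  for (const std::string &Header : scanIncludes(MainFileSource))
    Pending.emplace_back(Header, std::async(std::launch::async, readFile, Header));

  std::unordered_map<std::string, std::string> Contents;
  for (auto &P : Pending)
    Contents[P.first] = P.second.get();
  return Contents;
}
```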
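
Third, caching the code completion result set: if consecutive requests complete at the same point, the unfiltered Sema + index results from the first request can be re-filtered by the growing identifier prefix instead of reparsing on each keystroke. The types and matching logic here are hypothetical:

```cpp
// Hypothetical sketch of reusing one completion result set across keystrokes:
// keep the unfiltered results for the current completion point and re-filter
// them by the prefix typed so far, instead of re-running Sema each time.
#include <mutex>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct CompletionItem {
  std::string Name; // plus signature, documentation, score, ...
};

struct CompletionSession {
  std::string File;
  unsigned Line = 0, Column = 0;     // where the completed identifier starts
  std::vector<CompletionItem> Items; // unfiltered Sema + index results
};

class CompletionCache {
  std::mutex Mu;
  std::optional<CompletionSession> Last;

public:
  // Reusable only if the request continues the same session: same file and
  // same start point of the identifier being completed.
  std::optional<std::vector<CompletionItem>>
  lookup(const std::string &File, unsigned Line, unsigned Column) {
    std::lock_guard<std::mutex> Lock(Mu);
    if (Last && Last->File == File && Last->Line == Line && Last->Column == Column)
      return Last->Items;
    return std::nullopt;
  }

  void update(CompletionSession S) {
    std::lock_guard<std::mutex> Lock(Mu);
    Last = std::move(S);
  }
};

// Each new keystroke only re-filters (and could re-rank) the cached items.
std::vector<CompletionItem>
filterByPrefix(const std::vector<CompletionItem> &Items, const std::string &Prefix) {
  std::vector<CompletionItem> Out;
  for (const CompletionItem &I : Items)
    if (I.Name.compare(0, Prefix.size(), Prefix) == 0)
      Out.push_back(I);
  return Out;
}
```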