After a great deal of testing, reading… and showering… I finally managed to locate the error causing the problem, but fully understanding it requires some knowledge of how the JVM works. At its core, however, is what I see as a bug in the Domino API. (NOT ODA!)
Just to quickly go through the environment: we are dealing with a Domino 9.0.1 server running the Extension Library (ExtLibs) as well as the OpenNTF Domino API (ODA). The application in question uses only the ExtLibs; although ODA is installed on the server, it is not listed as a project dependency. ExtLibs is used only to get the current database. A fix pack round about 42-ish is installed; I do not have the exact number memorized.
To reproduce the problem, I created a database with only two XPages and only two methods. The first method created 120,000 documents. Each document had only two fields set manually: Form and Status. To set the status, I used creatingDocNum % 3 to make sure that a third of all created documents had the same value. We should therefore have 40,000 documents with the status set to “0”, and so on.
The next XPage executed a search over these documents, looking for all documents with that form name and the status “0”. As stated, there would have to be 40,000 hits in the database. Calling lotus.domino.Database.search(String, DateTime, int) returns a lotus.domino.DocumentCollection. Its getCount() returned 500 (I passed 500 as the maximum document count). While iterating over the documents, I put each universal ID (upper-cased) into a HashMap and counted the iterations. After each loop pass, I checked how much memory was remaining; once a certain minimum value was reached, I jumped out of the iteration loop. I then printed the HashMap size, the iteration count, and the value returned by the collection’s getCount(). The iteration count was well over the desired 500 documents (anywhere between 1,500 and 6,000, depending on the memory available), while getCount() always returned 500. A PMR has been opened for this case.
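A minimal sketch of that reproduction, assuming a Java bean on an XPage with access to the current lotus.domino.Database (the form name "TestForm" and the thresholds are mine, not from the original code; this only runs on a Domino server):

```java
import java.util.HashSet;
import java.util.Set;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class SearchBugRepro {

    private static final int MAX_DOCS = 500;               // maximum passed to search()
    private static final long MIN_FREE = 8L * 1024 * 1024; // bail-out threshold in bytes

    public static void run(Database db) throws NotesException {
        // Ask for at most 500 matches; null cutoff date means "search everything".
        DocumentCollection col =
                db.search("Form = \"TestForm\" & Status = \"0\"", null, MAX_DOCS);

        Set<String> unids = new HashSet<String>();
        int iterations = 0;

        Document doc = col.getFirstDocument();
        while (doc != null) {
            unids.add(doc.getUniversalID().toUpperCase());
            iterations++;

            // Emergency exit: stop iterating before we run out of heap.
            if (Runtime.getRuntime().freeMemory() < MIN_FREE) {
                break;
            }

            Document next = col.getNextDocument(doc);
            doc.recycle();
            doc = next;
        }

        // On the affected server, getCount() reports 500 while the
        // iteration count climbs far beyond that.
        System.out.println("getCount():   " + col.getCount());
        System.out.println("iterations:   " + iterations);
        System.out.println("unique UNIDs: " + unids.size());
    }
}
```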
My work-around is two-pronged. The first part is easy: I simply jump out of the iteration once enough documents have been processed. The second part is that I constantly check how much memory is free; once I hit a minimum, I also jump ship. The appropriate message is displayed to the user, who can then refine the search or try again later.
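The guard logic itself is plain Java and independent of Domino. A sketch of the two-pronged exit (class and method names are mine; in the real application the iterator would walk the DocumentCollection):

```java
import java.util.Iterator;

public class BoundedIteration {

    /**
     * Processes at most maxDocs entries from ids, stopping early if free
     * heap drops below minFreeBytes. Returns the number of entries
     * actually processed.
     */
    public static int drain(Iterator<String> ids, int maxDocs, long minFreeBytes) {
        int processed = 0;
        while (ids.hasNext() && processed < maxDocs) {
            ids.next(); // in the real app: read fields, build the result object
            processed++;
            if (Runtime.getRuntime().freeMemory() < minFreeBytes) {
                break; // jump ship: tell the user to refine the search
            }
        }
        return processed;
    }
}
```

The caller decides which of the two exits fired (count reached vs. memory low) and shows the matching message to the user.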
But this was sadly not enough for my ‘want-to-know-everything’ attitude (though in reality I can never know enough): during my testing I found that the available memory was always scaled back down to 64 MB…
Here is the point where JVM knowledge is paramount. The runtime always tries to keep the smallest memory footprint possible. To that end, when garbage collection is performed, the amount of memory committed to the runtime is recalculated. If the available memory is too small, the JVM allocates more, up to the configured maximum; if there is plenty of free memory, it shrinks the heap again. All well and good… normally. But what if a few users go in and start a massive search at the same time? Because that is what they are there for: call up their data and have a good day. We can then enter a situation where that 64 MB of RAM is just not going to cut it. Furthermore, because these massive calls happen simultaneously, the runtime cannot grow the heap fast enough. Even though we set the notes.ini to a maximum heap of 512 MB, we get an OutOfMemoryError at only 64 MB.
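You can watch these numbers move from inside the JVM. The three figures below are the configured ceiling, the heap currently committed, and the unused part of the committed heap; the gap between “total” and “max” is exactly what bites us during a sudden allocation spike:

```java
public class HeapFigures {

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        long max   = rt.maxMemory();   // ceiling, e.g. set via -Xmx / HTTPJVMMaxHeapSize
        long total = rt.totalMemory(); // heap currently committed by the JVM
        long free  = rt.freeMemory();  // unused portion of the committed heap

        // total can sit far below max: the JVM only grows the heap on demand,
        // after garbage collection has recalculated its footprint.
        System.out.println("max:   " + (max   / (1024 * 1024)) + " MB");
        System.out.println("total: " + (total / (1024 * 1024)) + " MB");
        System.out.println("free:  " + (free  / (1024 * 1024)) + " MB");
    }
}
```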
Enter the XPages Gods, who have not only mastered development but are more than hip-deep in Domino administration… (in other words, Google and some awesome blogs…)
LET ME SAY THIS WITH EXTREME CAUTION!!!
Setting HTTPJVMMaxHeapSize=512M is not enough.
Setting JVMMinHeapSize=128M may be necessary.
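In notes.ini terms, the combination looks like this (512M/128M are the values from this case, not universal recommendations; on some releases HTTPJVMMaxHeapSizeSet=1 is also needed so the value is not overwritten — verify against your Domino version):

```
HTTPJVMMaxHeapSize=512M
HTTPJVMMaxHeapSizeSet=1
JVMMinHeapSize=128M
```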
I am always very careful before saying that we need to allocate more memory, because of how Domino works. I go through a checklist to verify the following:
- We are not throwing more memory at a memory leak (are we using recycle() appropriately and correctly?).
- How many XPages applications are running on the server (they all share the HTTP task’s JVM, and with it the same heap).
- The server host can handle it, i.e. enough RAM is physically installed in the machine.
- The problem is not limited to a situation that can be fixed in another way that also makes sense.
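For the first checklist item, the canonical recycle pattern when walking a collection looks like this (a sketch using lotus.domino types, so it only runs on a Domino server):

```java
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class RecycleWalk {

    public static void walk(DocumentCollection col) throws NotesException {
        Document doc = col.getFirstDocument();
        while (doc != null) {
            // Fetch the next document BEFORE recycling the current one;
            // getNextDocument(doc) needs a live handle.
            Document next = col.getNextDocument(doc);

            // ... read fields, do the real work here ...

            doc.recycle(); // frees the backend C++ object, not just the Java wrapper
            doc = next;
        }
    }
}
```

If documents are iterated in a loop without recycle(), the backend handles pile up until garbage collection of the wrappers, which is one of the classic ways to burn heap that no amount of extra memory will truly fix.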
As a side note, I found that this error occurs whether or not the OpenNTF Domino API is used. Naturally, I spent more time reproducing the error for IBM with their API than with ODA.
So there we have it. A nice little bug that has been handed over to the guy with a fly-swatter. Happy Programming!
The OutOfMemoryErrors were a result of processing the documents and putting their fields into Java objects that were then stored in a List in the view or session scope. The OutOfMemoryError was not a direct result of performing the search; rather, it was caused by the bug: the search delivers a DocumentCollection containing more documents than it should, while getCount() returns the requested maximum, not the number of documents actually in the collection.