Connect 2017: Notes from the first day

A year (and a few weeks) has passed and I am at my second IBM Connect conference. Firstly, I am very grateful to be here, so THANKS BOSS!! (You know who you are, even if you wish you didn't.) Last year I gave a very in-depth summary of the entire day and really went into what I liked and did not like, but today I mostly want to write a few thoughts down for myself for a later time. Okay, since you asked so nicely, maybe I will give a brief summary of what was going on.

As is normal for these sorts of events, the day starts with a rather long opening session. Before I continue, it is important to realize that everyone who comes to these conferences has different goals. I do not work in marketing, and I do not work in sales. I am a programmer. A simple basement alien typing on the keyboard, praying to the mighty golden hard drive that no one calls and I can just keep going… (or so I tell everyone… also, in case my boss reads this? ehm… just kidding…) Getting to my point, these opening sessions are very much geared towards selling stuff. At least that is how it feels to me. There are a lot of buzzwords, and even buzzword bingo… (seriously… bingo after not even two minutes? I call shenanigans on that one…), and of course there are hints about where the IBM road is going to take us in the months and years to come. It is not my place to tell you what is planned, and that is also not the point of this article, if there is a point to begin with.

Having said this, there are a few things to shout out… Huge applause to the one-woman band 'Kawehi', who is also listed on the front page of the Connect News Today mini newspaper! It may not have been my type of music, but your energy and enthusiasm were key in getting people into the mood of the opening session!

Also, a special mention for Dr. Sheena Iyengar, who was the special speaker at this year's opening. I found her part of the opening session the most intriguing, as well as the most important. Thank you for your insight.

Moving on from the opening, I attended two breakout sessions today. The first dealt with use cases for cognitive connections. I expected more code, but it was still interesting. The second session I visited, however, was what really fired my imagination. Paul Withers and Christian Güdemann presented on GraphQL. I have to admit that I did not read the description of this session before adding it to my schedule. I expected to hear about OpenNTF and ODA upgrades regarding GraphDB, a topic that Nathan Freeman discussed last year. As it turns out, GraphQL has nothing to do with GraphDB. GraphQL is a layer above the storage layer; it is a way of transferring data from a server to a client. To that end, there is a provider and a consumer. The consumer would in most cases be a website, or in some cases the server-side processing for a web application. The provider would most likely sit on the server hosting the database. It does not have to (in theory), but I could imagine that it would make data retrieval that much faster.
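
To make the provider/consumer split a bit more concrete, here is a minimal sketch of what a provider could look like. This is purely my own illustration using the open-source graphql-java library (not the Darwino tooling from the session), and the Domino lookup hinted at inside the data fetcher is hypothetical:

```java
import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

import java.util.HashMap;
import java.util.Map;

public class MiniGraphQLProvider {

    public static void main(String[] args) {
        // The schema describes what the consumer may ask for,
        // independently of how the data is actually stored.
        String sdl = "type Query { document(unid: String!): Document } "
                   + "type Document { unid: String, subject: String, status: String }";

        TypeDefinitionRegistry registry = new SchemaParser().parse(sdl);

        // The data fetcher is where a Domino (or any other) lookup would live.
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
                .type("Query", builder -> builder.dataFetcher("document", env -> {
                    String unid = env.getArgument("unid");
                    // Hypothetical stand-in for a real back-end call,
                    // e.g. database.getDocumentByUNID(unid) on the Domino side.
                    Map<String, Object> doc = new HashMap<String, Object>();
                    doc.put("unid", unid);
                    doc.put("subject", "Hello from the provider");
                    doc.put("status", "0");
                    return doc;
                }))
                .build();

        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(registry, wiring);
        GraphQL graphQL = GraphQL.newGraphQL(schema).build();

        // The consumer only sends a query string and gets back exactly the fields it asked for.
        ExecutionResult result = graphQL.execute("{ document(unid: \"ABC123\") { subject status } }");
        Object data = result.getData(); // e.g. {document={subject=..., status=0}}
        System.out.println(data);
    }
}
```

The nice part, as I understood the session, is exactly this separation: the consumer only ever sees the schema and sends query strings, while where the provider actually gets the data from (Domino, SQL, or anything else) stays hidden behind the data fetcher.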

As everyone who reads this site is aware, I am primarily involved with XPage development. Furthermore, this summer marks not only ten years of living in Germany, but also five years as a full-time State Certified Programming Professional and eight years working at holistic-net GmbH, including my apprenticeship. I am the main developer responsible for two large applications. The first I built from the ground up on top of a pre-existing Notes application consisting of multiple Notes databases. The second I inherited and did my best to rebuild on a very limited budget. If there is one thing that I have noticed, it is that the biggest issue is retrieving data quickly. I have had numerous issues with full-text index searches hitting performance walls like a drunk in a labyrinth. What is worse is that there has never been any reproducible pattern. Everything is great and then *faceplant* (followed by the request for another beer). So I find the idea of a new search provider very appealing. That is, if it can work with Domino…

One thing that was mentioned in this session is the ability to use tools from Darwino to implement a Domino GraphQL provider. I must say right now that I have not looked into this tooling myself yet. There were also a few caveats to it that I would need to verify before risking anyone's wrath, or getting anyone into trouble because, in the end, I simply remembered something incorrectly.

What I am taking away from today is the idea of creating a better way to search for data in Domino, providing that data to any consumer using GraphQL, and then consuming it from any front end that wants it, whether that is ASP.NET MVC or XPages, while making development as quick as possible for everyone. And one thing is clear: I have a lot of research to do…

OutOfMemoryError Follow Up

After spending a great deal of time testing, looking… showering… I finally managed to locate an error that was causing the problem, but fully understanding it requires some knowledge of how the JVM works. At its core, however, is what I see as a bug in the Domino API. (NOT ODA!)

Just to quickly go through the environment: we are dealing with a Domino 9.0.1 server running the ExtLibs as well as ODA. The application in question only uses the ExtLibs; although ODA is installed on the server, it is not listed as a project dependency. ExtLibs is used only to get the current database. A fix pack somewhere around 42-ish is installed; I do not have the exact number memorized.

To reproduce the problem, I created a database with only two XPages and only two methods. The first method created 120,000 documents. Each document had only two fields that were set manually: Form and Status. To set the status, I used creatingDocNum % 3 to make sure that a third of all created documents had the same value. We should therefore have 40,000 documents with the status set to "0", and so on.
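
Roughly, the creation method could look like the following minimal sketch (the class, form name, and document count are my own reconstruction, using the standard lotus.domino API):

```java
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class TestDataBuilder {

    // Creates docCount documents with only a Form and a Status item.
    // Status is creatingDocNum % 3, so a third of the documents share each value.
    public static void createTestDocuments(Database database, int docCount) throws NotesException {
        for (int creatingDocNum = 0; creatingDocNum < docCount; creatingDocNum++) {
            Document doc = database.createDocument();
            try {
                doc.replaceItemValue("Form", "TestForm");
                doc.replaceItemValue("Status", String.valueOf(creatingDocNum % 3));
                doc.save(true, false);
            } finally {
                doc.recycle(); // release the backing C handle on every pass
            }
        }
    }
}
```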

The next XPage executed a search over these documents, looking for all documents with that form name and the status "0". As stated, there would have to be 40,000 hits in the database. When performing a lotus.domino.Database.search(String, DateTime, int), we are returned a lotus.domino.DocumentCollection. getCount() returned 500 (I used 500 as the maximum document count). When iterating over the documents, I put the universal ID (upper-cased) into a HashMap and also counted the iterations. After each iteration, I checked how much memory was remaining; once a certain minimum value was reached, I jumped out of the loop. I then printed the HashMap size, the iteration count, and the value returned by getCount() on the collection object. The iteration count was well over the desired 500 documents (anywhere between 1,500 and 6,000 depending on the memory available), yet getCount() always returned 500. A PMR has been opened for this case.
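
Condensed into code, the test loop looked roughly like this (the formula, threshold, and names are my own; the point is the mismatch between the iteration count and getCount()):

```java
import java.util.HashMap;
import java.util.Map;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class SearchBugDemo {

    private static final int MAX_DOCS = 500;                      // passed to search() as the maximum
    private static final long MIN_FREE_BYTES = 16L * 1024 * 1024; // bail-out threshold, my own choice

    public static void run(Database database) throws NotesException {
        // null cutoff date = consider all documents (my assumption for this sketch)
        DocumentCollection col =
                database.search("Form = \"TestForm\" & Status = \"0\"", null, MAX_DOCS);

        Map<String, Boolean> seenUnids = new HashMap<String, Boolean>();
        int iterations = 0;

        Document doc = col.getFirstDocument();
        while (doc != null) {
            seenUnids.put(doc.getUniversalID().toUpperCase(), Boolean.TRUE);
            iterations++;

            // Jump ship before the heap is exhausted.
            if (Runtime.getRuntime().freeMemory() < MIN_FREE_BYTES) {
                break;
            }

            Document next = col.getNextDocument(doc);
            doc.recycle();
            doc = next;
        }

        // On the affected server, iterations climbs far past 500
        // while getCount() keeps reporting 500.
        System.out.println("getCount(): " + col.getCount()
                + ", iterated: " + iterations
                + ", unique UNIDs: " + seenUnids.size());
    }
}
```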

My work-around is two-pronged. The first part is easy: I simply jump out of the iteration once enough documents have been processed. The second part is that I constantly check how much memory is free; once I hit a minimum, I also jump ship. The appropriate message is displayed to the user, who can then refine the search or try again later.

But this was sadly not enough for my 'want-to-know-everything' attitude (though in reality I can never know enough). During my testing, I found that the available memory was always set back down to 64 MB…

Here is the point where JVM knowledge is paramount. The runtime always wants to keep the smallest memory footprint possible. To that end, when garbage collection is performed, the amount of memory available to the runtime is recalculated. If the available memory is small enough, it will allocate more memory, up to the maximum configured value. If there is a bit of free memory, it will lower the available memory. All well and good… normally… But what if a few users go in and start a massive search at the same time? Because that is what they are there for: call up their data and have a good day. We could enter a situation where that 64 MB of RAM is just not going to cut it. Furthermore, because these massive calls are happening simultaneously, we have just entered a situation where the runtime is not going to allocate enough memory fast enough. Even though we set the ini to use a maximum of 512 MB, we are getting an OutOfMemoryError at only 64 MB.
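
The distinction that matters here is between the configured ceiling and what the JVM has actually claimed at a given moment. A tiny sketch (my own, nothing Domino-specific) to make that visible:

```java
public class HeapSnapshot {

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024;

        // maxMemory():   the configured ceiling (e.g. the 512 MB from the ini setting)
        // totalMemory(): what the JVM has actually claimed right now (can shrink back towards the minimum)
        // freeMemory():  what is still unused inside that currently claimed amount
        System.out.println("max:   " + rt.maxMemory() / mb + " MB");
        System.out.println("total: " + rt.totalMemory() / mb + " MB");
        System.out.println("free:  " + rt.freeMemory() / mb + " MB");
    }
}
```

An OutOfMemoryError at 64 MB despite a 512 MB maximum simply means the burst of allocations arrived faster than the runtime grew totalMemory() towards maxMemory().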

Enter the XPages gods who have not only mastered development but are more than hip-deep in Domino administration… (in other words, Google and some awesome blogs…)

LET ME SAY THIS WITH EXTREME CAUTION!!!

Setting HTTPJVMMaxHeapSize=512M is not enough.
Setting JVMMinHeapSize=128M may be necessary.

I am always very careful before saying that we need to allocate more memory. This is because of how Domino works. I go through a checklist to verify the following:

  1. We are not throwing more memory at a memory leak. (Are we using recycle() appropriately and correctly?)
  2. How many XPage applications are running on the server? (Each app normally runs in its own JVM.)
  3. The server host can handle it, i.e. enough RAM is physically installed in the machine.
  4. The problem is not limited to a situation that can be fixed another way that also makes sense.

As a side note, I have found that this error occurs whether or not the OpenNTF Domino API is used. Naturally, I have spent more time reproducing the error for IBM with their API than with ODA.

So there we have it. A nice little bug that has been handed over to the guy with a fly-swatter. Happy Programming!

EDIT

The OutOfMemoryErrors were a result of processing the documents and putting their fields into Java objects that were then stored in a List in the view or session scope. The OutOfMemoryError was not a direct result of performing the search; it was caused by the bug: the search delivers a DocumentCollection object that holds more documents than it should, while getCount() returns the desired maximum, not the number of documents actually in the collection.
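
To make the failure mode concrete, here is a minimal sketch (class and field names are mine) of the kind of per-document object that ends up in a scoped List; when the collection silently holds thousands of documents instead of 500, a list like this grows until the 64 MB heap is gone:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

// Simplified stand-in for the search-result beans held in viewScope/sessionScope.
public class SearchHit implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String unid;
    private final String status;

    public SearchHit(String unid, String status) {
        this.unid = unid;
        this.status = status;
    }

    // Copies fields out of every document in the collection.
    // If the collection really holds 6,000 documents instead of the expected 500,
    // this list is twelve times larger than planned.
    public static List<SearchHit> fromCollection(DocumentCollection col) throws NotesException {
        List<SearchHit> hits = new ArrayList<SearchHit>();
        Document doc = col.getFirstDocument();
        while (doc != null) {
            hits.add(new SearchHit(doc.getUniversalID(), doc.getItemValueString("Status")));
            Document next = col.getNextDocument(doc);
            doc.recycle();
            doc = next;
        }
        return hits;
    }
}
```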

XPage Opinions

<rant><bitching>

I had an interesting experience today that I would like to share. I met a group of Notes power users for a small and informal gathering. The food was good, the alcohol was flowing (so was the soda), and we all started talking. Some of us were developers, some of us were administrators, some of us were in management, and all of us use Notes. We got onto the topic of IBM stuff, and I must say it was a generally tricky topic. The general topic of Notes was fine, but when it came to XPages… let's just say it was a rough time. As it turns out, there are people who love to hate on XPages. I must say, I used to be one, but I converted…

There are a few arguments that I want to touch upon. The first one is that it is impossible to take over projects done by others. This is a very good point, even if I am not in complete agreement. Unfortunately, we could not agree on the cause of this trouble. One person said that the trouble is based solely on the different ways to "hide" the code. Yes, I will say that this is an issue, but I am also not sure we were talking about the same things. General style when building XPage applications is a problem, but it is no more or less problematic than in classic Notes development. The question of whether to use a script library or put the calculations directly into the Form design element itself is the same argument I have about using JavaScript libraries versus putting the code directly into the controls themselves. The correct answer is also clear: use the damned library… that is what it is there for. For me, this is also not a reason that taking over someone's project should be difficult. Refactoring is needed, but if the project was done correctly, then chances are the original programmer is still working on it. Another possibility was the location of Java code. There are a few places where you can put such code. The Java design element is usable (from what I have read), and some of us prefer to use the good old WebContent/WEB-INF/src folder in the Java perspective. THIS IS A STRENGTH OF XPAGES, SO STOP HATING ON THE FRAMEWORK!!!!!

Another reason I heard today to hate XPages was that the applications are slow or unstable. This excuse makes me angry… very angry. The applications are as stable and as fast as the programmer allowed them to be. And on this point, I am going to point and laugh at every single corporate manager who says that it is cheaper and easier to outsource your development overseas, where you only have to pay a hundred dollars a day for programming services, instead of using your next-door neighbor who charges a thousand. You will get what you pay for! Of course, we cannot all know from birth how XPage applications should be built, and it takes a while to learn the proper methods of error handling (shout-out to the dreaded NotesException), how to properly recycle objects in order to prevent killing other functions, how to build a proper cache, and how to balance CPU against memory… These things are not easy. Not everyone can do this. THIS IS NOT XPAGES' FAULT, SO STOP HATING ON THE FRAMEWORK!!!!!

Another problem is our idea of what XPage development is. XPages is a way to program modern web-based applications which may or may not use a Domino data source and which run on a Domino server. IT IS NOT RAD (rapid application development). At the very least, it is not the RAD that many remember from classic Notes. Much more planning needs to go into XPage development. A lot more skill is needed. A wider range of skills is needed: JavaScript, Java, Dojo, jQuery, GUI/XML, architectural skills… not every Joe who knows Excel and spreadsheets can do this! It is still more rapid than building a JSF/JSP application from scratch, but it is a totally different ball park. THIS IS MODERN APPLICATION DEVELOPMENT AND NOT XPAGES' FAULT, SO STOP HATING ON THE FRAMEWORK!!!!!

Instead, let me tell you about the awesomeness that is this framework. This framework offers modern web-based applications. It offers a way to combine the tried and true Domino nature with the scalability and efficiency of SQL. It offers easy binding of third-party software into your applications. A separation of data and GUI allows for a much more robust and rich application that does not create a dependency on certain servers or data sources, but rather an abstraction which allows almost anything. The deletion of a certain server does not require any complex desktop processes to make sure that the tile on the client is switched to the proper database instance. More control is granted over the application design thanks to easy access to modern JavaScript libraries. Sharing code between application instances through self-built OSGi libraries running on the server enables build-once, copy-never functionality not seen before in Notes applications… The reasons for and advantages of using XPages in your environment are manifold. Don't get frustrated at first glance or first failure and say that the whole thing is shit. Open your mind to what is now possible, change your perspective to see that this is not classic Notes, and learn the correct ways to use this framework.

</bitching></rant> …. doing is up to you…

holistic application management

I do not normally do marketing type stuff, but today I want to make an exception.

Here at holistic-net, we are often put in charge of tasks where we need to quickly find information about a company's application environment. Sometimes this is because we are planning a migration, and sometimes it is just because we need to see what is already available on the server before we start new development. Other times we lack access to certain databases, but we still need current ACL or last-access information. holistic application management (or ham for short) is the tool that I like to use for this.

I'll give a short example of when we needed it. About two weeks ago, one of our customers decided that they wanted to shut down an existing server and move a few of the applications to another server. We normally do not need access to those applications, so we were naturally not included in the ACLs. Had we had access to a current ham installation, we could have easily found out which applications were last used, we could have seen the ACLs, and we could have seen which replicas exist on other servers and in other domains. This would have turned the work we did in 8 man-hours (not including waiting time for ACL changes and email/telephone correspondence) into a 1 man-hour job. Another task that was not really possible was finding out who was responsible for which applications, as well as other metadata that we could not find in the catalog.nsf or by other means. ham offers a central place to keep all application metadata clean and up to date. All we would have needed was access to the ham data application, and we could have found what we needed quickly and easily.

Here is a quick excerpt from our website. Ignoring the marketing jargon, as a developer who also needs to maintain the integrity of the customer's servers, I find ham a tool that I cannot work well without. Everything else is too expensive in terms of the man-hours needed for workarounds. For those of you who understand German, I am including a demonstration video that one of my colleagues recorded. Please write in the comments if you would like a translated transcript of the video.

If you are interested in trying this application on your servers, please write to me, or to holistic-net at sales@holistic-net.de

ham

Administration of the complete company-wide application environment

holistic application management (ham) is the perfect tool for the following tasks:

  • Use the tool, with our support, to create assessments that serve as a foundation for a data migration.
  • Administer and configure your entire company-wide Domino/Notes application environment quickly, efficiently, cheaply, and transparently.
  • Automatically calculate your IT costs and link them with your cost calculations.

holistic application management combines an application's technical data, which is imported periodically and dynamically, with metadata, which is partially generated automatically and partially entered manually. This information can be made available to many target groups in a company-wide information portal.