About reederProgramming

I already have an about me page, so I will just put a quick bit of info here. I am a Notes/Domino developer at holistic-net GmbH located in Hannover, Germany. I use Java primarily at home and as often as I can at work. I have dabbled in C# and a few other languages and platforms. This website started out as a place for me to quickly store and access some of my most important how-tos and has started to become a place where I can really help others too!

Connect 2017: Notes from the first day

A year (and a few weeks) has passed and I am at my second IBM Connect conference. Firstly, I am very grateful to be here, so THANKS BOSS!! (You know who you are, even if you wish you didn’t know.) Last year, I gave a very in-depth summary of the entire day and really went into what I liked and did not like, but today I want to more or less write a few thoughts down for myself for a later time. Okay, since you asked so nicely, maybe I will give a brief summary of what was going on.

As is normal for these sorts of events, there is a somewhat longer opening session. Before I continue, it is important to realize that everyone who goes to these conferences has different goals. I do not work in marketing, and I do not work in sales. I am a programmer. A simple basement alien typing on the keyboard, praying to the mighty golden hard drive that no one calls and I can just keep going… (or so I tell everyone… also, in case my boss reads this? ehm…. just kidding…) Getting to my point, these opening sessions are very much geared towards selling stuff. At least that is how it feels to me. There are a lot of buzzwords, and buzzword bingo… (seriously… bingo after not two minutes? I call shenanigans on that one…), and of course there are tips on where the IBM road is going to take us in the months and years to come. It is not my place to tell you what is planned, and that is also not the point of this article, if there is a point to begin with.

Having said this, there are a few things to shout out… Huge applause to the one-woman band ‘Kawehi’, who is also listed on the front page of the Connect News Today mini newspaper! It may not have been my type of music, but your energy and enthusiasm were key in bringing people into the mood of the opening sessions!

Also, a special mention to Dr. Sheena Iyengar, who was the special speaker at this year’s opening. I think her part of the first opening session was the most intriguing, as well as the most important. Thank you for your insight.

Moving on from the opening, I attended two breakout sessions today. The first dealt with use cases for cognitive connections. I expected more code, but it was still interesting. The second session I visited was what really fired my imagination: Paul Withers and Christian Güdemann presented on GraphQL. I have to admit that I did not read the description of this session before adding it to my schedule; I expected to hear about OpenNTF and ODA updates regarding GraphDB, a topic that was discussed last year by Nathan Freeman. GraphQL, however, has nothing to do with GraphDB. GraphQL is a layer above the storage layer; it is a way of transferring data from a server to a client. To that end, there is a provider and a consumer. The consumer would in most cases be a website, or in some cases the server-side processing for a web application. The provider would most likely sit on the server hosting the database. It does not have to (in theory), but I could imagine that it would make data retrieval that much faster.
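To make the provider/consumer split concrete, here is a minimal, hypothetical GraphQL exchange. The type and field names are invented for illustration; they are not from the session:

```graphql
# The consumer asks for exactly the fields it needs...
query {
  document(unid: "ABC123") {
    subject
    status
  }
}

# ...and the provider answers with JSON shaped like the query:
# { "data": { "document": { "subject": "My first doc", "status": "0" } } }
```

The appeal for me is that the consumer decides the shape of the result, rather than the provider pushing whole documents across the wire.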

As everyone who reads this site is aware, I am primarily involved with XPage development. Furthermore, this summer marks not only ten years of living in Germany, but also 5 years of being a full-time State Certified Programming Professional, and 8 years working at holistic-net GmbH, including my apprenticeship. I am the main developer responsible for two large applications. The first I built from the ground up, using a pre-existing Notes application consisting of multiple Notes databases. The second I inherited and did my best to rebuild with a very limited budget. If there is one thing that I have noticed, it is that the biggest issue is retrieving data quickly. I have had numerous issues with full text index searches hitting performance walls like a drunk in a labyrinth. What is worse is that there has never been any reproducible pattern. Everything is great and then *faceplant* (followed by the request for another beer). So I find the idea of a new search provider very appealing. That is, if it can work with Domino…

One thing that was mentioned in this session is the ability to use tools from Darwino in order to implement a Domino GraphQL provider. I must say right now that I have not looked into this tool myself yet. There were also a few caveats to this tool that I would need to verify before risking anyone’s wrath, or getting anyone into trouble when, in the end, I just remembered something incorrectly.

What I am taking away from today is the idea of creating a better way to search for data in Domino, provide that data to any consumer using GraphQL, and then consume that data with any front end that wants it, whether it be ASP.NET MVC or XPages, while making development as quick as possible for everyone. And one thing is clear: I have a lot of research to do…..

Trials with JPA and XPages

Intro

Don’t get me wrong, I love working with Domino and documents and all that 😉 , but with XPages I sometimes want to use good old-fashioned SQL or even hybrid solutions. I do not want to be bothered with writing SQL statements myself in the DAO classes; I just want to use a prebuilt solution that just works. Enter JPA.

Before I go too much into this, I want to thank the guys on Slack for giving me a hand with the bits and pieces of this. I had a great deal of trouble getting this thing going and it is still very far from perfect. I can at least say, however, that it runs.

Quick trial history

I started out with a simple PoC: a type of registry for the DVDs I own. (Not creative, just quick and simple.) I decided to do this with EclipseLink, built in Eclipse Mars. I quickly built a few entity classes and used the XML ORM mapping to map those entities to the database. When this was done, I exported the project to a jar file and then imported that jar into an *.nsf. This first try failed. I am sure I can think of numerous reasons why it did not work; the main issue I had was that the persistence unit I tried to configure could not be found by the runtime. At this point, I copied the EclipseLink jars into the nsf directly, and copied the entities into the .nsf. In other words, the JPA layer was no longer its own single jar. This allowed me to try moving the configuration files around to other locations. I tried everything I could think of. Fail.
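For reference, the persistence unit that could not be found is the one declared in META-INF/persistence.xml. A minimal sketch of what such a file roughly looks like for EclipseLink (the unit name, entity class, and connection values below are placeholders, not my real ones):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="dvdRegistry" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>de.domain.jpa.entities.DVD</class>
    <properties>
      <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
      <property name="javax.persistence.jdbc.url" value="jdbc:mysql://192.168.0.1:3306/library"/>
    </properties>
  </persistence-unit>
</persistence>
```

A "persistence unit not found" error usually means this file is not in a META-INF folder on the classpath the provider actually scans, which, as far as I can tell, is exactly the tricky part inside an .nsf.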

Rethink and Rework

(in other words, ask around…)

Let me just say that if you are not already a member of the XPages Slack community, why aren’t you? JOIN!!! xpages.slack.com

I went into the random channel and posted a general question asking if anyone had had success with JPA and XPages. Jesse Gallagher pointed out this post by Toby Samples, which uses the Hibernate JPA framework. I had seen this presentation before, but I must admit that it lacks the meat and potatoes needed to get it off the ground. Don’t get me wrong, it is a great resource! The other reason why I did not do much with it at first is that it was done in Eclipse and not Domino Designer. Most of the stuff that I program for XPages is done directly in Designer and not in Eclipse. After talking to the guys on Slack, and seeing that Toby Samples had success with Hibernate, I decided to indeed give it a try.

Downloading Hibernate

In the slides (see the above link), Toby talks about the Hibernate Tools being downloaded and installed in Eclipse. As we all know, Notes is based on Eclipse, albeit an older version of Eclipse… After a lot of searching, I did find an update site with the tools and downloaded a copy of them. I then installed them onto my client as a widget and also onto the server. This really did nothing worthwhile: I could not access the tools in Designer, nor were they available on the server. I deleted them pretty quickly. Instead, I found this site to download an older version of the ORM. It is necessary to take the 4.3.11 version because Notes/Domino runs on a sorely outdated version of Java. Once this was downloaded, I imported the required jars into my .nsf. I also put these jars into the <domino>/jvm/lib/ext/ directory as described in the slides. The only issue I had at this point was that Designer couldn’t process the source quickly enough to give me code suggestions, and I had the feeling that Designer was always a step away from crashing. Indeed, it did crash once or twice… (After a system restart it seems to have gotten better.)

Configuration and Setup

The first thing that I did was to create the hibernate.cfg.xml file. This is all pretty straightforward. I am not going to discuss how to create this file, but I will show you my copy…

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC 
"-//Hibernate/Hibernate Configuration DTD 3.0//EN"
"http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
    <session-factory name="hibernateSessionFactory">
        <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="hibernate.connection.password">****</property>
        <property name="hibernate.connection.url">jdbc:mysql://192.168.0.1:3306/library?createDatabaseIfNotExist=false</property>
        <property name="hibernate.connection.username">DB_Programmatic_User</property>
        <property name="hibernate.show_sql">true</property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
        <property name="hibernate.search.autoregister_listeners">false</property>
        <!-- <property name="hibernate.hbm2ddl.auto">create</property>  -->

        <mapping class="de.domain.bluemix.jpa.entities.Actor"></mapping>
        <mapping class="de.domain.bluemix.jpa.entities.DVD"></mapping>
        <mapping class="de.domain.jpa.entities.Genre"></mapping>
    </session-factory>
</hibernate-configuration>

The second thing I did was create an application listener with static information for creating sessions.

package de.domain.mysqltrial.services;

import java.io.Serializable;

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;

import com.ibm.xsp.application.ApplicationEx;
import com.ibm.xsp.application.events.ApplicationListener2;

public class PersistenceManager implements Serializable, ApplicationListener2 {

      private static final long serialVersionUID = 1L;

      private static SessionFactory sessionFactory;

      private static boolean init = false;
      
      public static SessionFactory getSessionFactory() {
            if((sessionFactory == null) || (sessionFactory.isClosed())){
                  throw new IllegalStateException("Session Factory is null or closed!");
            }
            return sessionFactory;
      }
      
      public static boolean isInit() {
            return init;
      }
      
      private void init(){
            if(!isInit()){
                  try{
                        System.out.println("Initializing Session Factory");
                        
                        Configuration conf = new Configuration().configure();
                        ServiceRegistry serviceRegistry= new StandardServiceRegistryBuilder().applySettings(conf.getProperties()).build();
                        sessionFactory = conf.buildSessionFactory(serviceRegistry);
                        init = true;
                  } catch(Throwable t){
                        t.printStackTrace();
                  }
            }
      }

      public void reInit(){
            destroy();
            init();
      }
      
      private void destroy(){
            System.out.println("Destroying Session Factory");
            init = false;
            if(sessionFactory != null) sessionFactory.close();
            sessionFactory = null;
      }

      public void applicationCreated(ApplicationEx arg0) {
            init();
      }

      public void applicationDestroyed(ApplicationEx arg0) {
            destroy();
      }

      public void applicationRefreshed(ApplicationEx arg0) {
            reInit();
      }
}

The point of the application listener is to make sure cleanup is done correctly and that everything is initialized correctly. It probably is not needed in this way, but I found it, at the very least, to be a cool idea. This class must also be registered. Here is a screenshot with the location of these files.

[Screenshot: package structure in the project showing the location of these files]
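As far as the registration itself goes: if I remember correctly, the listener is picked up via a plain-text services file inside the NSF. Treat the exact path and service name below as things to verify; I am writing this from memory:

```
# file (in the NSF's Java source folder):
#   META-INF/services/com.ibm.xsp.core.events.ApplicationListener
# content (a single line, the fully qualified class name):
de.domain.mysqltrial.services.PersistenceManager
```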

Primary Problem

This configuration worked…. almost. I kept getting security exceptions. The runtime was not being granted the permissions it needed to run the code. Only after I added the following lines to the java.policy file was I able to get the code to execute properly.

permission java.lang.RuntimePermission "getClassLoader"; 
permission java.lang.RuntimePermission "setContextClassLoader"; 

permission java.util.PropertyPermission "jboss.i18n.generate-proxies", "write";
permission java.security.AllPermission;

This is a situation that I find sucky. It is alright for a test environment, but I would not want to mess with the policy file for an end customer. My question is: does anyone have a possible solution for this?
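One idea I still want to test, using standard java.policy syntax: scope the grant to a codeBase instead of granting everything globally. The path below is a placeholder; I have not verified which codeBase the XPages runtime actually presents for code running out of an NSF:

```
grant codeBase "file:/path/to/domino/workspace/-" {
    permission java.lang.RuntimePermission "getClassLoader";
    permission java.lang.RuntimePermission "setContextClassLoader";
    permission java.util.PropertyPermission "jboss.i18n.generate-proxies", "write";
};
```

That would at least avoid handing java.security.AllPermission to everything on the server.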

Conclusion

It is possible to use JPA and it works well,  but I am not happy about the wide-open security window that was necessary in order to get it to work.

One more time, thank you to those on Slack who gave me a hand…

Toby Samples, David Leedy, Jesse Gallagher, and others who had hints…

 

Attachments and Java Beans

Up until this point, I must admit that I have been lazy. Even though most of the XPages I have created in the last two years have made extensive use of Java Beans, I have left the attachments to the XspDocument and the typical upload and download controls. I did not want to open that can of worms and just wanted to stick with what I know works. Well, that is dumb. It is fine for one or two minor applications that are never going to be used anyway, but when it comes down to it, I want it to be correct. Today, I started that adventure and, as with all new things, a Google search was performed for anything that could help me and point me in the right direction. (Honestly, what did you old folks do without search engines? I’d be lost! Or at the very least spending 10 hours a day at the public library!) What I did not find was a single post that covered both upload and download. Since I do not want to lose what I found today, I decided to write a quick post. Thank you to everyone that I am stealing from to write this…. 😛

The first thing that I noticed was that my concept was faulty. At least I think it was. I wanted to have a single file in my Java Bean that I could upload and download at will, and access and save in my DAO layer. Of course I could be mistaken, but it does not seem to work that way. Uploading documents and downloading them again needs to be performed in two different actions, in two different ways, and with different objects. Furthermore, I do not even offer both functions on the same page, though both are possible with the same bean.

First off, my test bean is very simple. If I were to extract an interface (just to get a quick look at what the bean contains), it would hold the following information:

public interface IFileTest {

/*
* This function contains the information to save the document. Normally, I do this in a separate DAO layer object.
* The example that I used had the majority of the information in one single class.
* I did not experiment as I wanted to keep everything as simple as possible. Such experiments are further on my to-do list.
*/
public abstract void save() throws Exception;

/*
* This function contains the logic to download the attachment. It is performed in a separate XPage containing only this function call.
*/
public abstract void downloadAttachment() throws Exception;

/*
* This function returns a string that points to the XPage with the download attachment function call.
*/
public abstract String getDownloadURL();

/*
* This function will read the parameters from the URL in order to initialize the data for the viewscoped bean.
* Normally with the Beans I create, this function will access the DAO and set the data in an object contained by this Bean.
* This Bean is a controller in the MVC design pattern.
*/
public abstract void init();

/*
* I just find this helpful.
*/
public abstract String getUnid();

/*
* com.ibm.xsp.component.UIFileuploadEx.UploadedFile. This object is used ONLY in the upload core XPage control.
*/
public abstract UploadedFile getFile();

public abstract void setFile(UploadedFile file);

}

 

As I said, this test is done with as simple a construct as possible.

After this was completed, I worked on uploading a document. It seemed the most logical starting point. My primary source for this was a StackOverflow question posted by David Leedy, so note that the following code comes primarily from Mark Leusink, the accepted answerer of Mr. Leedy’s question. The first part is the simplest. I have a property in my Bean of type com.ibm.xsp.component.UIFileuploadEx.UploadedFile with a corresponding getter/setter pair, and I use EL to bind the core upload control to the bean. The real magic happens in the save logic.
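For completeness, the EL binding in the XPage source looks roughly like this ('FileTest' is the bean name from my faces-config.xml; the IDs are arbitrary):

```xml
<xp:fileUpload id="fileUpload1" value="#{FileTest.file}" />
<xp:button id="btnSave" value="Save">
    <xp:eventHandler event="onclick" submit="true"
        refreshMode="complete" action="#{FileTest.save}" />
</xp:button>
```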

public void save() throws Exception{
    
    /* I use the openNTF Domino API (ODA) for nearly all of my
     * applications. If this was not the case, we would have to worry about
     * proper recycling. Keep in mind that this is also just a test. Normally
     * my routines have much cleaner error handling.
     * 
     * The following statement uses a utility that I built that helps me get
     * key objects. I am assuming here that you know how to get a handle on the current
     * document. 
     */
    Document doc = ODASessionHelper.getCurrentDatabase().createDocument();
    doc.replaceItemValue(FIELD_FORM, FORM_NAME);
    
    // 'file' is the instance of UploadedFile bound to the core FileUpload control
    if(file != null){ 
      IUploadedFile fl = file.getUploadedFile();
      
      // The standard java.io.File pointing to the temporary file on the server.
      // Named 'serverFile' here to avoid shadowing the 'file' property above.
      File serverFile = fl.getServerFile();
      
      String fileName = fl.getClientFileName();
      // this gave me ONLY the name of the file, without any path information.
      System.out.println(String.format("clientFileName: '%s'", fileName)); 
      
      // On my system this prints ";" -- note that File.pathSeparator is the
      // separator for path *lists* (e.g. the classpath), not for directories.
      System.out.println(String.format("separator is '%s'", File.pathSeparator));
      
      // Build the name the temporary file will be renamed to. Because
      // File.pathSeparator is used here, the resulting attachment name
      // contains a semicolon (see the note below this listing).
      File realNameFile = new File(serverFile.getAbsoluteFile() + File.pathSeparator + fileName);
      System.out.println(String.format("realFile name: '%s'", realNameFile.getAbsoluteFile()));
      
      boolean renamedFile = serverFile.renameTo(realNameFile);
      if(renamedFile){
        //typical code to attach a file to a document.
        RichTextItem body = doc.createRichTextItem(FIELD_BODY);
        body.embedObject(EmbeddedObject.EMBED_ATTACHMENT, "", realNameFile.getAbsolutePath(), null);
      } else {
        throw new Exception("file could not be renamed");
      }
      doc.save();
      
      /*
       * Normally at this stage, I save the UNID so that I get that document again
       * to prevent a bunch of new documents being created.  This is just me being 
       * lazy and wanting to get a test out ASAP.
       */
    } else {
      throw new NullPointerException("file was null");
    }
  }

The only issue that I have with the above code is that the new name of the attachment is a bit messed up: the attachment name is changed to “_&lt;strangenumbers&gt;tmp;realAttachmentName.txt”. This is because File.pathSeparator is a semicolon on Windows; it separates entries in path lists like the classpath, while File.separator is the character that belongs between the directories of a single path. Building the new name with File.separator should be the proper fix, but I have not reworked that yet; instead I have a workaround in my download function, and a workaround is still only a workaround.
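A quick standalone sketch of the difference (plain Java, nothing Domino-specific; the helper name is mine, not from the original code):

```java
import java.io.File;

public class SeparatorDemo {

    // Build the renamed attachment path the way it arguably should be done:
    // File.separator joins directories within ONE path ("/" or "\"),
    // while File.pathSeparator separates entries of a path LIST such as
    // the classpath (":" or ";") and has no business inside a file name.
    static String attachmentPath(String tempDir, String clientFileName) {
        return tempDir + File.separator + clientFileName;
    }

    public static void main(String[] args) {
        System.out.println("File.separator:     '" + File.separator + "'");
        System.out.println("File.pathSeparator: '" + File.pathSeparator + "'");
        System.out.println(attachmentPath("/tmp/upload_1234", "report.txt"));
    }
}
```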

As I previously said, I did not find a post with both upload and download functionality explained. I did, however, find an awesome article on OpenNTF regarding downloading attachments programmatically. So, here is a quick shout-out to Naveen Maurya, who posted the XSnippet. In the example provided, an XPage calls a server-side JavaScript function which gets a handle on the FacesContext and the server response in order to download all files in a zip file. I just adapted this to run in my Bean and not in JavaScript.

/*
   * same disclaimer. I wanted to do this quickly. Normally my error handling is 
   * significantly better. I just want the theory here.
   */
  public void downloadAttachment() throws Exception{
    Database db = null;
    Document doc = null;
    
    //java.io.OutputStream;
    OutputStream stream = null;
    
    // java.util.zip.ZipOutputStream;
    ZipOutputStream out = null;
    
    // java.io.BufferedInputStream;
    BufferedInputStream in = null;
    
    try{
      if(StringHelper.isNullOrEmpty(getUnid())) throw new IllegalStateException("Unid is null");
      
      // again, I am using ODA, and this is just a way to get the current database.
      db = ODASessionHelper.getCurrentDatabase();
      
      /*
       * I normally do this in multiple steps.
       * 1. try to get the document with the UNID
       * 2. try to get the document with the noteID
       */
      doc = db.getDocumentByUNID(getUnid());
      if(!doc.hasItem(FIELD_BODY)){
        throw new IllegalStateException("body not located");
      } else {
        Item item = doc.getFirstItem(FIELD_BODY);
        if(!(item instanceof RichTextItem)){
          // I would assume that I would have to come up with a MIME variant as well.
          throw new IllegalStateException("item is not of type richtext");
        } else {
          // the instanceof check above makes this cast safe
          RichTextItem body = (RichTextItem)item;
          
          // typed Vector so the for-each loop below compiles without a cast
          Vector<EmbeddedObject> objs = body.getEmbeddedObjects();
          if(objs.isEmpty()){
            throw new IllegalStateException("body has no objects to download");
          } else {
            
            ExternalContext extContext = FacesContext.getCurrentInstance().getExternalContext();
            // javax.servlet.http.HttpServletResponse;
            HttpServletResponse response = (HttpServletResponse)extContext.getResponse();
            
            response.setHeader("Cache-Control", "no-cache");
            response.setDateHeader("Expires", -1);
            response.setContentType("application/zip"); // change this for different types.
            // I gave a static name to my zip file, but the original code was dynamic
            response.setHeader("Content-Disposition", "attachment; filename=Attachments.zip");
            
            stream = response.getOutputStream();
            out = new ZipOutputStream(stream);
            
            for(EmbeddedObject att : objs){
              in = new BufferedInputStream(att.getInputStream());
              int length = in.available();
              byte[] data = new byte[length];
              in.read(data, 0, length);
              String nm = att.getName();
              
              /*
               * This is my workaround for the file names. Although they are saved in the document
               * with the incorrect name, I could at least download them again with the proper name.
               */
              ZipEntry entry = new ZipEntry(nm.contains(";") ? StringHelper.rightSubstring(nm, ";") : nm);
              out.putNextEntry(entry);
              out.write(data);
              in.close();
            }
          }
          // cleanup should be done properly.  this is a 'do as I say, not as I do' moment.....
          out.flush();
          out.close();
          stream.flush();
          stream.close();
          FacesContext.getCurrentInstance().responseComplete();
        }
      }
    } catch(Exception e){
      // very nasty error handling....
      e.printStackTrace();
      throw e;
    }
    
  }

In conclusion, I have a test XPage application with one form and two XPages. The first XPage allows saving attachments; it has the File Upload control provided by the XPages core and a save button. The second XPage is only used for downloading the attachments. It holds no content, but delivers the file to download via the HttpServletResponse in the beforeRenderResponse XPage action. The UNID of the document is passed with the URL.
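The download XPage itself is essentially empty. Sketched from memory (bean and method names as defined above, so verify against your own setup):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core"
    beforeRenderResponse="#{javascript:FileTest.init(); FileTest.downloadAttachment();}">
    <!-- no content: the response is written directly by downloadAttachment() -->
</xp:view>
```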

Although not implemented in an XPage yet, I also built the logic to open the download URL in a new window using client-side JavaScript:

window.open("#{javascript:FileTest.getDownloadURL()}", "_blank");

FileTest in the above example is the name of the bean as configured in the faces-config.xml file.

My next steps would be:

  • to build a view with which I could display the file names and other typical information available for file downloads
  • to export files without compressing them into a zip file
  • it goes without saying that I would have to refine the above functions and build in proper cleanup and error handling

Happy Programming!!




OutOfMemoryError Follow Up

After spending a great deal of time testing, looking… showering…. I finally managed to locate an error that was causing the problem, but fully understanding it requires knowledge of how the JVM works. At its core, however, is what I see as a bug in the Domino API. (NOT ODA!)

Just to quickly go through the environment: we are dealing with a Domino 9.0.1 server running the ExtLibs as well as ODA. The application in question is only using the ExtLibs; although ODA is installed on the server, it is not listed as a project dependency. ExtLibs is used only to get the current database. A fix pack round about 42-ish is being used; I do not have the number memorized.

To reproduce the problem, I created a database with only two XPages and only two methods. The first method created 120,000 documents. Each document had only two fields that were manually set: Form and Status. To set the status, I used creatingDocNum % 3 to make sure that a third of all created documents had the same value, so we should have 40,000 documents with the status set to “0” and so on.
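The distribution trick in isolation (plain Java, outside Domino):

```java
public class StatusDistribution {

    // Spread docCount documents evenly over the three status values "0", "1", "2"
    // by taking the running document number modulo 3.
    static int[] statusCounts(int docCount) {
        int[] counts = new int[3];
        for (int docNum = 0; docNum < docCount; docNum++) {
            counts[docNum % 3]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        int[] c = statusCounts(120000);
        // 120,000 documents -> 40,000 per status value
        System.out.println(c[0] + " / " + c[1] + " / " + c[2]);
    }
}
```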

The next XPage executed a search over these documents looking for all documents with that form name and the status “0”. As stated, there would have to be 40,000 hits in the database. When performing a lotus.domino.Database.search(String, DateTime, int), we are returned a lotus.domino.DocumentCollection. Its getCount() returned 500 (I used 500 as the maximum document count). While iterating over the documents, I put the universal ID (toUpperCase()) in a HashMap and counted the iterations. After each loop, I requested how much memory was remaining; once a certain minimum value was reached, I jumped out of the iteration loop. I then printed the HashMap size, the iteration count, and the value returned by getCount() on the collection object. I was well over the desired 500 document count (anywhere between 1,500 and 6,000 depending on the memory available), while getCount() always returned 500. A PMR has been opened for this case.

My work-around is two-pronged. The first bit is easy: I simply jump out of the iteration once enough documents have been iterated over. The second bit is that I constantly check how much memory is free; once I hit a minimum, I also jump ship. The appropriate message is displayed to the users so they can refine the search or try again later.
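Stripped of all Domino specifics, the guard looks roughly like this (the 500 cap and the 16 MB floor are example values; in the real code the iterator comes from the DocumentCollection returned by Database.search()):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SearchGuard {

    static final int MAX_DOCS = 500;                       // desired result cap
    static final long MIN_FREE_BYTES = 16L * 1024 * 1024;  // emergency brake

    // Iterate search results, but bail out when either the document cap is
    // reached (prong one) or the heap is about to run dry (prong two).
    static List<String> collect(Iterator<String> unids) {
        List<String> result = new ArrayList<String>();
        Runtime rt = Runtime.getRuntime();
        while (unids.hasNext()) {
            if (result.size() >= MAX_DOCS) break;  // enough documents collected
            long available = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
            if (available < MIN_FREE_BYTES) break; // jump ship before the OOM
            result.add(unids.next().toUpperCase());
        }
        return result;
    }
}
```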

But this is sadly not enough for my ‘want-to-know-everything’ attitude (though in reality I can never know enough). During my testing I found that the available memory was always set back down to 64 MB…

Here is the point where JVM knowledge is paramount. The runtime always wants to keep the smallest memory footprint possible. To that end, when garbage collection is performed, the amount of memory available to the runtime is recalculated. If the available memory is small enough, it will allocate more memory, up to the maximum configured value; if there is a bit of free memory, it will lower the available memory. All well and good… normally. But what if a few users go in and start a massive search at the same time? Because that is what they are there for: call up their data and have a good day. We could enter a situation where that 64 MB of RAM is just not going to cut it. Furthermore, because these massive calls are happening simultaneously, the runtime is not going to allocate enough memory fast enough. Even though we set the notes.ini to use a maximum of 512 MB, we are getting an OutOfMemoryError at only 64 MB.
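The numbers involved are easy to inspect from code. This little sketch shows how the '64 MB' figure relates to the configured maximum:

```java
public class HeapInfo {

    // Bytes the JVM can still hand out before an OutOfMemoryError:
    // the configured ceiling (maxMemory, i.e. HTTPJVMMaxHeapSize) minus what
    // is already allocated (totalMemory), plus the unused part of that
    // allocation (freeMemory). totalMemory is the value that shrinks back
    // down (to 64 MB in my case) after garbage collection.
    static long availableBytes(Runtime rt) {
        return rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("maxMemory   = " + rt.maxMemory());
        System.out.println("totalMemory = " + rt.totalMemory());
        System.out.println("available   = " + availableBytes(rt));
    }
}
```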

Enter the XPages gods who have not only mastered development but are more than hip-deep in Domino administration….. (in other words, Google and some awesome blogs…)

LET ME SAY THIS WITH EXTREME CAUTION!!!

Setting HTTPJVMMaxHeapSize=512M is not enough.
Setting JVMMinHeapSize=128M may be necessary.

I am always very careful before saying that we need to allocate more memory. This is because of how Domino works. I go through a checklist to verify the following:

  1. We are not throwing more memory at a memory leak. (Are we using recycle() appropriately and correctly?)
  2. How many XPage applications are running on the server? (They [normally] all share the HTTP task’s JVM, so they compete for the same heap.)
  3. The server host can handle it, i.e. enough RAM is physically installed in the machine.
  4. The problem is not limited to a situation that can be fixed another way that also makes sense.

As a side note, I have found that this error occurs whether or not the openNTF Domino API is used. Naturally, I have spent more time reproducing the error for IBM with their API than with ODA.

So there we have it. A nice little bug that has been handed over to the guy with a fly-swatter. Happy Programming!

EDIT

The OutOfMemoryErrors were a result of processing the documents and putting fields of the documents into Java objects that were then stored in a List in the view or session scope. The OutOfMemoryError was not a direct result of performing the search; rather, it was triggered by the bug: the search delivers a DocumentCollection that contains more documents than it should, while getCount() returns the requested maximum, not the number of documents actually in the collection.

Memory: A little fucker that dies before its time AKA Crap-Filled bowling balls

As anyone who knows me can tell, I take great pride in every application that I work with. Since spearheading my company’s XPage development starting in 2010(ish), I have developed, analyzed, and fixed numerous apps. They are like little children that I send off into the real world.

So when I get reports that one of them is misbehaving, I get real defensive, real fast. It is, unfortunately, my downfall. However, every app that I need to re-evaluate is a learning opportunity, and I do treat it as such. *Spanks the bad app with a vengeance* (Ok, not really)

Bad jokes and terrible ideas later, I will get into the issue at hand.

Let’s call this app ‘Waterfall Workflow’, or WWF. WWF is an application where I can expect a peak of about 800 concurrent users at its absolute extreme maximum; normal operations should be about half of that number. Users sign in to a main application which holds no more than configuration information and which is responsible for the XPages and coding. Coding is done primarily in Java. All code uses ODA, or as it is officially named, the openNTF Domino API, or just THE API!!! (It all depends on who is speaking 😛 ) Hi Paul, David, and Nathan!!! It also makes heavy use of the Extension Libraries, but let’s forget that for a moment.

The data is contained in about 4 separate .nsf databases. Each database has a specific function, i.e. Labels and Languages, Primary Data, Global configurations and database instances, etc.  Because I do not want to build a database connection for every little piece of the puzzle, I lazily add every piece of configuration heaven into a cache in the application scope.  This is done through a series of ‘Controller’ type Java classes.  No worries, I do not load every possible piece of scrap into its own AS variable.  Everything is neatly organized!!!  (Your pride is showing….  oops *zip*) The primary data is obviously not cached…. why should it be….

So all is fine and dandy until I decide to build an advanced search. Should be easy, right???  Yeah, why not. So let’s look at my solution and take a look at some more specifics of WWF.

  1. We are dealing with an application with heavy Author/Reader field usage.  (well there goes performance, but there is not too much I can do there…. I think…)
  2. We are dealing with approximately 60,000 documents worth of primary data. (Remember other information is stored in cache as fetched from other nsfs)
  3. Each primary data document may hold a single attachment, and every primary data document may have a separate document (linked via a key) containing up to 10 attachments. This gives us a max total of about 120,000 possible documents where the actual value is likely closer to roughly 80,000.
  4. The search is done in such a way that the query could have 20,000 hits or more. (theoretical)

Productive Server Info

  • 2 XPage applications are run on this server.
  • we are dealing with 64 bit Windows and 64 bit Domino 9.0.1 FP3 and some odd-numbered hot-fix pack
  • October 2015 ExtLibs, and ODA

The implementation of the advanced search is pretty simple. A user gets the possibility to select values for up to 10 or so fields contained in the primary data document.  (There are a total of about 120-odd fields per document) Dependent upon the user’s selection, a DbSearch is performed after building a query string.  Although I cannot remember why, I know that a full text index of the database was not built, and one is not desired.  The DbSearch is set to return a maximum of 500 documents. Depending on the selection of the user, a further search is performed on an archive which contains old data.

As previously stated, all actions on Domino data are performed using THE API (ODA). This of course includes the search. Once the search delivers the document collection, an iterator is created, over which the documents are read out one by one into Java objects which are then stored in a list.  These Java objects contain roughly 15 String attributes, and we are talking about a maximum of 1000 returned documents (2 searches, each returning 500 documents).  This is nothing groundbreaking. This list is stored in a session-scoped controller (so that the view can be closed and re-opened without performing the search a second time). We found no issues testing with up to 10 people in the testing environment.  We let this functionality go live, and BAM!!!!!!!!  OutOfMemoryErrors hit us (ok, hit me) like a ton of crap-filled bowling balls and I still cannot get the stench off of me. Design restore. Wash. Rinse. Rethink…..
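For scale, here is a rough back-of-the-envelope on what that session list should cost in memory (every number below is an assumption, not a measurement):

```java
// Rough footprint estimate for the session cache described above:
// 1000 result objects, each with ~15 String fields of ~50 characters.
public class FootprintEstimate {
    public static long estimateBytes(int objects, int fieldsPerObject,
                                     int avgChars) {
        // ~2 bytes per char in a Java String plus ~40 bytes assumed overhead
        long perField = 2L * avgChars + 40;
        return objects * fieldsPerObject * perField;
    }

    public static void main(String[] args) {
        long bytes = estimateBytes(1000, 15, 50);
        System.out.println(bytes / (1024 * 1024) + " MB"); // ~2 MB per session
    }
}
```

Under these assumptions that is on the order of 2 MB per session, so 10 test users should cost maybe 20 MB of heap, nowhere near enough to exhaust even a 256 MB JVM, which is exactly why the errors pointed away from the POJOs themselves.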

Since the design update included numerous little changes, I first had to localize the problem.  JMeter to the rescue in our confined QA environment which (as far as I can tell) is a 1 to 1 mock up of the final server. Same OS, same hardware specs, same (at least where it counts) config.  Same OSGi Plug-ins.

After setting up a test plan where x dummy users log in, go to the search page, and submit a search request via AJAX, I thought it would be a good idea to set x to 100 users. (All of whom are using the same credentials, but by checking the cookies they are all on their own individual sessions) No more than 10 search requests were submitted before BAM!!!!!  Another ton of crap-filled bowling balls.  Server restart. Wash. Rinse. Repeat.

So, where am I going wrong then?

I quickly built another app in the QA system containing one XPage, no configuration cache, and only a DbSearch and a dummy Java object being saved in the session scope. So far, only ODA was tested, and the same function construction was emulated (obviously without the extra final-version finesse). Same problem.  Next step: find out which step in the code is causing the error, and by the way, let’s cut it to a simple 5 or 10 dummy users.

Before I go further, I want to explain the princess that is the JVM. She has a maximum memory (this is how big her brain is), she has an available memory (how much she is willing to give you at the moment, but she’ll give you more if you need it), and a used memory (how much she is actually thinking about your lovely self). Let’s expand this into the Domino world, and we have two notes.ini variables with which we can play: HTTPJVMMaxHeapSize and its buddy HTTPJVMMaxHeapSizeSet. On a 64-bit system, you can play with this a bit. Its default is 256M, referring to a total maximum runtime memory of 256MB, and its buddy, when set to 1 (as far as I know), tells Domino not to reset the max heap size.  Don’t quote me on that though; it has been a while since reading Paul Withers’ awesome XPages book.
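In notes.ini terms, bumping the heap to 512M looks like this (the second line, as far as I know, keeps Domino from resetting the value on restart):

```
HTTPJVMMaxHeapSize=512M
HTTPJVMMaxHeapSizeSet=1
```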

After every critical call, and after every 100th document being iterated over, I printed:

  1. free memory calculated to MB
  2. total available memory calculated to MB
  3. maximum memory
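The logging itself is plain Java SE, something along these lines:

```java
// Minimal sketch of the memory logging described above, using the
// standard java.lang.Runtime counters, converted to MB.
public class MemoryStats {
    private static final long MB = 1024L * 1024L;

    static long freeMb()  { return Runtime.getRuntime().freeMemory()  / MB; }
    static long totalMb() { return Runtime.getRuntime().totalMemory() / MB; }
    static long maxMb()   { return Runtime.getRuntime().maxMemory()   / MB; }

    // Call after every critical step, e.g. every 100th document.
    static void log(String step) {
        System.out.println(step + ": free=" + freeMb()
                + "MB total=" + totalMb() + "MB max=" + maxMb() + "MB");
    }

    public static void main(String[] args) {
        log("baseline");
    }
}
```

Note that totalMemory() is the "available" number from the list above; it grows toward maxMemory() only when the JVM decides it needs to.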

From the beginning, I only had about 35 MB of free memory, 64 MB available, and a total of 256 MB. I played with the setting, going up to 512 MB, and then a total of 1024 MB. I found a few interesting things:

  1. Viewing the task manager resource/performance panel, the memory usage on the server never exceeded roughly 4 GB of the available 16 GB RAM.
  2. The available memory never exceeded 64MB
  3. the free memory (ok, I obviously was not seeing every millisecond’s value) never went below 5MB.
  4. On the server console, the iteration looping continued although I was also reading the bloody OutOfMemoryError crap-filled bowling ball message.

I am left with an interesting challenge.  What is the cause of this stupid bowling-ball shower? The following thoughts are going through my head…

  1. Is a domino configuration setting messing with me, and is that why the available memory is not increasing to match my current needs?
  2. Am I doing something wrong with the loop?
  3. Is it possible that the problem is not with me, but with my tools?
  4. Is it possible that ODA cannot recycle the objects fast enough to handle 10 concurrent requests to perform a function which does a dbsearch over approximately 80,000 documents and returns a maximum of 500?
  5. Is it possible that the OSGi Runtimes are not getting the memory they need to run?  If not, why would that not take the same value as is written in the notes.ini?
  6. What the fuck am I missing?
  7. How do I get the smell of crap-filled bowling ball off of me?  Does tomato juice work?

As you can tell, I am still trying to figure this out. I don’t expect you to have learned anything from this, but at least I got my thoughts out.

I am going to try taking yet another shower.


Connect16

If you are following me on Twitter, you will know that I have had my first taste of the Lotussphere / Connect experience.  It has been quite a day, and I could not be more ecstatic about some of the information I am receiving. The ideas that I am seeing are astounding and the people are wonderful. In all honesty, I am incredibly grateful for the opportunity to come here. I have shaken the hands of people that I have held in incredibly high regard, and even had the opportunity to meet someone that I was able to help with my blogs and videos (as bad as they might have been….) I am not certain at the moment how much information I am allowed to just blurt out on this blog, so instead I am going to write a quick evaluation of the sessions I attended.

The general morning sessions with IBM were very entertaining.  Acting and role playing, promises and announcements… It was fun and the time went quickly. But…  But…. It reminded me of a stay at Disney World.  Lots of fun, lots of pretty facades, but no background.  Of course, how can you go into background information during a sales pitch? And therein lies my problem with those sessions.  It was a sales pitch.  If we needed a sales pitch, we would not be at Connect.  (I think). Don’t get me wrong though!!! I loved what they were trying to sell!  IBM Verse with Watson and all of the Cognitive Business ideas are something that I am really going to have to research, and I am definitely interested. But what I was really missing was the part where they mention the cons. Hours of pros and no cons leaves me with a twitching eyelid and a stiff neck. It leaves me wondering where the caveats are. So let’s just disregard the obvious sales approach and let’s get into the real reason why we all came.  Let’s get into the sessions with the interesting people.

My first ‘real’ session of the day was entitled ‘OpenNTF – From Donation to Contribution’, given by Christian Guedemann. It was a small room, but comfy. (how sad is it that my browser does not recognize comfy as misspelled!) #disturbing!!!! Anyway, the general topic was getting involved. I have been a member of openNTF for quite some time now, and I really want to get involved, but other things seem to always get in the way. Mostly I am just lazy and lack original ideas.  [Did I just really write that?] Well, he went into the main shift in the way openNTF has worked over the years: how people mostly just threw their old stuff onto the site, trying to save it from simple deletion, and its new main focus of community involvement, getting people to contribute in any small or large way. Yeah, monetary donations are nice, but really getting involved is a much better way to contribute. The best part of this presentation was being shown how the community does not need to just contribute code, but rather can contribute in pure theory.

Let’s take me for example.  The lazy one…. 😛 who does not want to come up with a new idea for a project and write a lot of code can instead ‘only’ help contribute in words.  As talked about in a few sessions today, the XPage Knowledge Base is a wiki-based online encyclopedia with everything we as a community need to know.  This wiki is going to be a collection of a ton of blogs and resources already completed, as well as people coming up with new stuff that needs to be documented.  You want to know more about ODA (openNTF Domino API)? [I think I coined that phrase and love how I see it everywhere here, though I actually call it O-duh, as in Oh, duh! I knew there was something else we needed] Go to wiki.openntf.org and get the info you need.  You have a question not answered? Contribute the question!!!!!!! Let the community answer it. Of course questions about a specific problem still belong on StackOverflow, but topics not covered should be mentioned! This truly allows the community to get involved in every small way.  I love it.

As for the presentation itself, I enjoyed it.  The slides were well made, the information was well presented (both verbally and otherwise), and I only wish Mr. Güdemann gute Besserung [get well soon] and a good night’s rest!

My second session was with Nathan Freeman, covering the topic of graph databases. There I had another oh-duh moment, though not the good ODA kind, but rather the ‘oh, that was embarrassing’ kind. Once You Go Graph… truly was the name of the session, and no, the app did not shorten it for size. **hangs head in humiliation**

Fun aside, using graph concepts for webpage development is nothing overly new, but when presented with it, and knowing we can use the technology with what is already installed on my dev Domino servers…..  Nathan Freeman, you challenged the way I look at developing for Domino AND I LOVE IT. All I can say is, the more you know, the more you know you don’t know.  (Yes, I did post that to Twitter) Well presented, explained in an understandable manner, good slides [kudos to your slide assistant], and now the question is why no one else, including me, thought of it sooner! I cannot really say much more about this session because I have a lot of work to do to fully comprehend this technology. I can only add #ODAForTheWin.

My second-to-last session of the day was Optimus XPages, which focused on best practices. This was given by John Jardin. Very nice guy! As I have done XPage dev for a while now and have been hung by the learning curve on more than one occasion, it was great to see how another programmer has learned the software and found his own way. His suggestions for frameworks and the ideas that he presented are something that I am going to be looking at in very great detail in the coming months. Again, this was someone who challenged the way I think about how I work and what my projects are like. I may not have agreed 100% on everything (alright, only the single-application design was something that I questioned), but the information is invaluable.

My last session of the day was GIT ‘er Done, presented by Henry Newberry. Judging by the attendee count, this was probably the most undervalued session of the day, which I find really sad. The presentation itself may have been a little sloppy, but the information contained therein was excellent. We talked about how we can use platforms like BitBucket and tools like Git to add team SCM to our Domino development processes. This again made me question the procedures that my company has in place. Why do we have to use .ntfs and keep backups somewhere to keep versioning in place, and why can’t we simply use an SCM system like the modern Git? Of course we can.  And this was probably the most useful bit of information I got today. So to all you people who went to see the other big names: I feel good about my decision.  😛 Alright, I wish I could be in multiple places at once too, but I cannot.

I think the main title for my experiences today was, ‘Question why you do what you do’. I do not think I can put a better point to it. I will be thinking long and hard the next few weeks and perhaps longer about my style, my so-called ‘best practices’, as well as my work style. I am grateful to all of the presenters for giving me so much to reflect on.  To those I could not see yet, I am sorry, I will do the best I can! And again, many thanks to my company for sending me here. Looking forward to tomorrow!

XPage Opinions

<rant><bitching>

I had an interesting experience today that I would like to share. I met a group of Notes power users today for a small and informal gathering. The food was good, the alcohol was flowing (so was the soda), and we all started talking.  Some of us were developers, some of us were administrators, some of us were in management; all of us use Notes. We got on the topic of IBM stuff and I must say that it was a generally tricky topic.  The general topic of Notes was fine, but when it came to XPages…. Let’s just say it was a rough time. As it turns out, there are people that love to hate on XPages.  I must say, I used to be one, but I converted…

There are a few arguments that I want to touch upon.  The first one is that it is impossible to take on projects done by others.  This is a very good point, even if I am not in complete agreement.  Unfortunately, we did not agree as to the cause of this trouble. One person said that the trouble is based solely on the different ways to “hide” the code. Yes, I will say that this is an issue, but I am also not sure we were talking about the same stuff.  General style when building XPage applications is a problem, but it is no more or less problematic than in classic Notes development.  The question of whether to use a script library or put the calculations directly into the Form design element itself is the same argument that I have with using JavaScript libraries versus putting the code into the controls themselves.  The correct answer is also clear: use the damned library… That is what it is there for. For me, this is also not a reason that taking over someone’s project should be difficult. Refactoring is needed, but if the project was done correctly, then chances are the original programmer is still working on it. Another possibility was the location of Java code.  There are a few places where you can place such code.  The Java design element is usable (from what I have read), and some of us prefer to use the good old WebContent/WEB-INF/src folder located in the Java perspective. THIS IS A STRENGTH OF XPAGES, SO STOP HATING ON THE FRAMEWORK!!!!!

Another reason I heard today to hate XPages was that the applications are slow or unstable.  This excuse makes me angry… very angry. The applications are as stable and as fast as the programmer allowed them to be.  And for this point, I am going to point and laugh at every single corporate manager who says that it is cheaper and easier to outsource your development overseas, where you only have to pay a hundred dollars a day for programming services, instead of using your next-door neighbor who gets a thousand.  You will get what you pay for! Of course, we cannot all know from birth how XPage applications should be built, and it takes a while to learn the proper methods of error handling (shout out to the dreaded NotesException), to learn how to properly recycle objects in order to prevent killing other functions, and to build a proper cache balancing CPU vs memory…. These things are not easy.  Not everyone can do this.  THIS IS NOT XPAGES’ FAULT, SO STOP HATING ON THE FRAMEWORK!!!!!
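To be concrete about the recycle discipline mentioned above, here is a minimal sketch. RecyclableDoc is a stub standing in for lotus.domino.Document (the real classes need a Domino runtime), but the try/finally shape is the point:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the recycle-in-a-loop discipline for classic Domino Java.
// RecyclableDoc is a stand-in for lotus.domino.Document.
public class RecycleLoop {

    interface RecyclableDoc {
        String getValue();
        void recycle();
    }

    static List<String> readAll(List<RecyclableDoc> docs) {
        List<String> values = new ArrayList<>();
        for (RecyclableDoc doc : docs) {
            try {
                values.add(doc.getValue()); // extract what you need first
            } finally {
                doc.recycle(); // always free the backend handle, even on error
            }
        }
        return values;
    }
}
```

(One of ODA's selling points is that it handles this recycling for you; the sketch shows what you are signing up for without it.)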

Another problem is our idea of what XPage development is.  XPages is a way to program modern web-based applications which may or may not use a Domino data source and which run on a Domino server.  IT IS NOT RAD (rapid application development).  At the very least, it is not the RAD that many remember from classic Notes.  Much more planning needs to go into XPage development.  A lot more skill is needed.  A wider range of skills is needed: JavaScript, Java, Dojo, jQuery, GUI/XML, architectural skills…. not every Joe who knows Excel and spreadsheets can do this! It is still more rapid than building a JSF/JSP application from scratch, but it is a totally different ball park.  THIS IS MODERN APPLICATION DEVELOPMENT AND NOT XPAGES’ FAULT, SO STOP HATING ON THE FRAMEWORK!!!!!

Instead, let me tell you the awesomeness that is this framework.  This framework offers modern web-based applications.  It offers a way to combine the tried and true Domino nature with the scalability and efficiency of SQL.  It offers easy binding of third-party software into your applications. A separation of data and GUI allows for a much more robust and rich application that does not create a dependency on certain servers or data sources, but rather an abstraction which allows almost anything. The deletion of a certain server does not require any complex desktop processes to make sure that the tile on the client is switched to the proper database instance.  More control is granted to the application design due to easy access to modern JavaScript libraries. Sharing code between application instances with the use of self-built OSGi libraries running on the server enables build once – copy never functionality not seen before in Notes applications…  The reasons for and advantages of using XPages in your environment are manifold.  Don’t get frustrated at the first glance or first failure and say that the whole thing is shit.  Open your mind to what is now possible, change your perspective to see that this is not classic Notes, and learn the correct ways to use this framework.

</bitching></rant> …. doing is up to you…

holistic application management

I do not normally do marketing type stuff, but today I want to make an exception.

Here at holistic-net, we are often put in charge of tasks where we need to quickly find information about a company’s application environment.  Sometimes this is because we are planning a migration, and sometimes it is just because we need to see what is available on the server already before we start new development.  Other times, we lack access to certain databases, but we still need current ACL or last-access information.  holistic application management (or ham for short) is the tool that I like to use.

I’ll give a short example of when we needed it.  About two weeks ago, one of our customers decided that they wanted to shut down an existing server and move a few of the applications to another server.  We normally do not need access to those applications, so we were naturally not included in the ACLs. Had we had access to a current ham tool, we could have easily found out which applications were last used, we could have seen the ACLs, and we could have seen what possible replicas are found on other servers and in other domains.  This would have turned the work we did in 8 man-hours (not including waiting times for ACL changes and email/telephone correspondence) into a 1 man-hour job.  Another task that was not really possible was finding out who was responsible for which applications, as well as other metadata that we could not find in the catalog.nsf or by other means.  ham offers a central place to keep all application metadata clean and up to date.  All we would have needed was to be given access to the ham data application, and we could have found what we needed quickly and easily.

Here is a quick excerpt from our website.  Ignoring the marketing jargon, as a developer who also needs to maintain the integrity of the customer’s servers, I find ham a tool that I cannot work well without.  Everything else is too expensive in terms of the man-hours needed for workarounds. For those of you who understand German, I am including a demonstration video that one of my colleagues recorded.  Please write in the comments if you would like a translated transcript of the video.

If you are interested in trying this application on your servers, please write to me, or to holistic-net at sales@holistic-net.de

ham

Administration over the complete company-wide application environment

holistic application management (ham) is the perfect tool for the following tasks:

  • Use this tool with our support for creating assessments to use as a foundation for a data migration.
  • Administrate and configure your entire company wide Domino / Notes application environment quickly, efficiently, cheaply, and transparently.
  • Automatically calculate your IT costs and link them with cost calculations.

holistic application management combines an application’s technical data, which is periodically and dynamically imported, with meta data, which is partially generated automatically and partially inputted manually. This information can be made available to many target groups in a company wide information portal.


Java Beans and funny stuff

I have such a backlog of posts that I have written and have neglected to post that it is not even funny.  I have a few videos that still need to be edited or redone, and it is nuts. So first off, I am sorry for the long breaks between posts, but life tends to get in the way.

Today I was working with an apprentice and getting him involved with XPages.  He seems to enjoy it so far, and it is a help for me because I can give some of the more mundane tasks to him.  He has zero Notes/Domino experience and has focused primarily on .NET development.  If you go to derultimativeligatipp.de, you can see a bit of his handiwork.  Of course he worked with another apprentice of ours to build that site, and I must say together they built one awesome application.  But I digress.  Of course I think our apprentices are amazing, but we want to quickly discuss our adventures of the day with beans and XPages.

As readers may or may not be aware, I have spent a great deal of time developing a standard java library that I use in all of my XPage applications for use both internally and for customers.  It includes custom controls, standard beans, and now a configuration.  But a simple configuration was not good enough for me.  Let me quickly get into my use case.

Up until now, I have been using a file saved under the Resources/Files tab of the project.  I wanted to get around needing profiles, which can be a cached pain in the rear; I originally did not want to have a view to look up a specific document; and I did not want to touch the xsp.config XML file. Of course there are some wonderful snippets available from Paul Withers in case you would prefer that approach.  I wanted to save values that differ from database instance to database instance, as well as from dev version to QA version to productive version.  As far as I am aware, performing a design refresh also rewrites the config file.  Really, the best way to get the functionality I wanted was a good old-fashioned NotesDocument, a good old-fashioned view, and the wonderful ODA view method getFirstDocumentByKey(). #ODAOnceMoreForTheWin
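The lookup pattern itself is simple enough to sketch without Domino. Here the ‘view’ is stubbed with a Map; real code would call getFirstDocumentByKey() on an ODA View and read the config fields off the returned Document (all names below are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the keyed-lookup pattern behind ODA's View.getFirstDocumentByKey():
// a config "view" maps a lookup key to the first matching "document".
// Stubbed with Maps, since the real classes need a Domino runtime.
public class ConfigLookup {

    private final Map<String, Map<String, String>> view = new HashMap<>();

    void addDocument(String key, Map<String, String> fields) {
        view.putIfAbsent(key, fields); // keep only the first match per key
    }

    // Stand-in for view.getFirstDocumentByKey(key); null if no match.
    Map<String, String> getFirstDocumentByKey(String key) {
        return view.get(key);
    }
}
```

The point of the pattern: one view sorted by key, one document per configuration, and a single cheap keyed read instead of profile documents or design-refresh-prone config files.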

This brought up interesting points that I could discuss with the apprentice: abstract classes, beans, and Expression Language.   I wanted to build the following structure:

AbstractConfig
-contains the view alias for lookups which will be used for all XPage configurations
-abstract init() method
-abstract getLookupKey() method
-a few other methods that are sensible for our use case but may not be needed for all.

AbstractDashboardConfiguration => AbstractConfig (just an example)
-implements init() to fetch the needed data
-protected final Strings for each field in the NotesDocument
-private members for each field value and the appropriate getter/setters

DashboardConfigurationDocument => AbstractDashboardConfiguration (just an example)
-save() implementation
-specific XPage rendering helper functions
-is set up as a viewScope variable

DashboardConfiguration => AbstractDashboardConfiguration (just an example)
-Specific methods to use a re-init interface that I built so that all of my applicationScoped beans can be reinitialized with a click of a button by an admin
-obviously applicationScoped
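Stripped of the Domino plumbing, that skeleton looks roughly like this (data access is stubbed out, and every field and view name is illustrative; a real init() would read a NotesDocument via the shared lookup view):

```java
// Compact, runnable sketch of the hierarchy described above.
abstract class AbstractConfig {
    // One shared view alias used for all XPage configuration lookups.
    static final String LOOKUP_VIEW = "(luConfig)"; // illustrative alias

    abstract void init();
    abstract String getLookupKey();
}

abstract class AbstractDashboardConfiguration extends AbstractConfig {
    // Field names shared by the document bean and the cache bean.
    protected static final String FIELD_TITLE = "DashTitle"; // illustrative

    protected String title;

    @Override
    void init() {
        // Stub: a real init() would locate the document for getLookupKey()
        // in LOOKUP_VIEW and read FIELD_TITLE from it.
        this.title = "Dashboard";
    }

    String getTitle() { return title; }
}

// viewScope bean: edits and saves the underlying configuration document.
class DashboardConfigurationDocument extends AbstractDashboardConfiguration {
    @Override
    String getLookupKey() { return "dashboard"; }

    void save() {
        // Stub: write the fields back to the NotesDocument, then tell the
        // applicationScope bean to re-run init() so the cache refreshes.
    }
}

// applicationScope bean: read-only cache, reinitialized on demand.
class DashboardConfiguration extends AbstractDashboardConfiguration {
    @Override
    String getLookupKey() { return "dashboard"; }

    void reinit() { init(); } // hooked to the admin re-init button
}
```

Both concrete beans inherit the same init() and field constants, which is exactly the point: the lookup logic lives once, in the abstract layer.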

As you can see, the build up is pretty complex, and this is the easiest of examples.  There are probably a few “WTF?” questions that I should touch upon, so let me get to them first.

First off, I am sure the reason for an AbstractConfig class is clear.  When all configuration documents are already being saved in the same view, then why not?  Certain fields need to be set in order to guarantee that they are displayed in the correct view, and why should the name of the view be set in each configuration class?  It just makes sense to have a single AbstractConfig Java class.  But the question that probably comes to mind is: why is there a need for another abstract class before the real implementation?

The answer is pretty simple: I hate StackOverflowErrors.  I started to create two classes to handle configuration information.  One bean would be responsible for saving and editing the information (DashboardConfigurationDocument), and the other would be responsible for caching the information in the applicationScope (DashboardConfiguration).  Without the abstract class I am left with the following conundrum…..

It is clear that DashboardConfigurationDocument should get its information from the document… I mean….  it is sort of implied in the name.  It should also save itself. It then needs to inform the applicationScoped DashboardConfiguration bean that it should refresh its data. This data could be read from DashboardConfigurationDocument to get around needing to write the init() function twice.  Right there we have a problem, because we have two classes that call each other.  It just makes the most sense that both of these classes have the same key functions and members in the abstract version, and the rest of the key implementation in the concrete classes.  It makes for a much cleaner implementation at the cost of hereditary chaos.  🙂   Truth be told, I find it awesome.

The second major question that I should directly address is: why do I not just save the DashboardConfigurationDocument bean in the application scope? Basically, I am a control freak wanting to do crazy things.  No….  I assure you that I have a reason.  Let’s look at lazy admin A and multi-tasker admin B.  Admin A makes a single change, directly in the appScoped bean, before going for coffee, and admin B gets a phone call after doing the same.  Neither is finished with their changes, neither of them had to save the changes explicitly, yet both of them have a change that is already potentially putting the application in an unstable state.  Baaaaaaaddddd  vooodooo….  baaaaaaaadddddd.  For this reason, I also like to separate my editing logic from my global configuration logic. Additionally, I can have XPage-UI-specific logic in the viewScoped class without feeling guilty about caching stupid spam members in the appScope bean.

I can use this pattern as often as I want, and I can be sure that I do not forget anything.  All of my field names are saved as final strings and I can use them in other sub-classes if I need to.  I can even decide later that I want to override my save function in another bean to get SQL support or whatever. It is just clean, and I like clean.

After taking some time to explain a lot of this to the apprentice, we dove into Expression Language and getting some good old binding done.  It worked like a charm…. almost.

This goes into another crazy use case.  I only wanted one configuration XPage.  I have an enum where specific configuration keys are saved.  These values are then presented to the user in a combobox where a specific configuration key can be selected; the document is opened, and the information is displayed.  We did this with Expression Language.  The combobox held String values representing each of the values in the enum, and the bean had the necessary getter and setter to take the String and set the enum value for that document.  This setter also triggered a process whereby the configuration was fetched and the values were refreshed.  It was a thing of beauty, until we realized that the values were not being refreshed on the XPage, although the values in the bean were being refreshed with the contents of the NotesDocument.  It took us two hours to figure this issue out.  The correct getters/setters were being called, the init() function was being called, the correct document was being retrieved, and the values were correct in the bean.  Why were they not refreshed on the XPage?
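The bean side of that binding can be sketched like so (the ConfigKey values and loaded contents are made up; a real init() would fetch the matching NotesDocument):

```java
// Sketch of the combobox binding described above: the XPage submits a
// String, the setter maps it onto the enum and triggers a reload.
public class ConfigPageBean {

    enum ConfigKey { DASHBOARD, MAILING, ARCHIVE } // illustrative keys

    private ConfigKey selectedKey;
    private String loadedFrom; // stands in for the reloaded field values

    // Bound to the combobox, e.g. value="#{configPageBean.selectedKeyAsString}"
    public String getSelectedKeyAsString() {
        return selectedKey == null ? "" : selectedKey.name();
    }

    public void setSelectedKeyAsString(String value) {
        selectedKey = ConfigKey.valueOf(value);
        init(); // refresh the bean from the matching config document
    }

    private void init() {
        // Stub: real code fetches the NotesDocument for selectedKey here.
        loadedFrom = "config:" + selectedKey.name().toLowerCase();
    }

    public String getLoadedFrom() { return loadedFrom; }
}
```

In plain Java the setter chain behaves exactly as intended, which matches what we saw: the bean was always correct, and only the page-side refresh timing fought us.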

First off, I thought it was a simple refresh problem.  The errorMessages control displayed no errors, and I thought that it was just a simple case of needing to either perform a full refresh, a refresh directly on the component, or some other crazy thing.  We messed around without any success.  In the end, this was one instance where a simple EL expression just was not enough.  We saw that the EL expression was calling the correct methods;  the onChange partial refresh on the page was working correctly.  My suspicion is that the partial refresh was being performed faster than the methods called by the EL expression.  We took out the EL data binding and instead implemented a SSJS function calling the setter method for the configuration key.  When we did this, everything worked as planned.  We also now have one page that can be used for multiple similar configuration documents that can easily be extended without changes in the design.

Lesson learned:  Java is awesome, EL is cool, but SSJS still has to be used for certain things.

XPage Java Tutorial Demo Download

The Demo

My original plan was to put up separate entries for the final two NotesIn9 videos and say a bit, post the code, and go on and on about stuff that I already mentioned, but as time goes on and I notice that I am not getting the time to do this as I wanted, the time has come for me to simply give you the file and let you have fun with it.  This also seems the best option because the code already has the Javadoc entries to document everything that I would say in the posts anyway. Take the nsf, open it up, read the code, and if you have any questions, comments, or concerns, please feel free to write to me.  As the readme says, I stand by my work, but I also know that there are a few aspects that could be optimized, and I have in fact optimized a bit in the last few weeks in my productive version.

XPage Java Demo Download

Important Notes

In this version of the Java Utilities, we used the openNTF Domino API M4.5.  This is also the current state of this demo.  In the future, it will be upgraded to a newer version of the API which eliminates certain bugs in the M4.5 version, such as iterating over empty document collection iterators.