A year (and a few weeks) has passed and I am at my second IBM Connect conference. Firstly, I am very grateful to be here, so THANKS BOSS!! (You know who you are, even if you wish you didn’t know.) Last year, I gave a very in-depth summary of the entire day and really went into what I liked and did not like, but today I want to more or less write a few thoughts down for myself for a later time. Okay, since you asked so nicely, maybe I will give a brief summary of what was going on.
As is normal for these sorts of events, there is a somewhat longer opening session. Before I continue, it is important to realize that everyone who goes to these conferences has different goals. I do not work in marketing, and I do not work in sales. I am a programmer. A simple basement alien typing on the keyboard, praying to the mighty golden hard drive that no one calls and I can just keep going… (or so I tell everyone… also, in case my boss reads this? ehm… just kidding…) Getting to my point, these opening sessions are very much geared towards selling stuff. At least that is how it feels to me. There are a lot of buzzwords, and buzzword bingo… (seriously… bingo after not even two minutes? I call shenanigans on that one…), and of course there are tips on where the IBM road is going to take us in the months and years to come. It is not my place to tell you what is planned, and that is also not the point of this article, if there is a point to begin with.
Having said this, there are a few things to shout out… Huge applause to the one-woman band ‘Kawehi’, who is also listed on the front page of the Connect News Today mini newspaper! It may not have been my type of music, but her energy and enthusiasm were key in bringing people into the mood of the opening sessions!
Also, special mention to Dr. Sheena Iyengar, who was the special speaker at this year’s opening. I think her part of the first opening session was the most intriguing, as well as the most important. Thank you for your insight.
Moving on from the opening, I attended two breakout sessions today. The first dealt with use cases for cognitive connections. I expected more code, but it was still interesting. The second session I visited, however, was what really fired my imagination. Paul Withers and Christian Güdemann presented on GraphQL. I have to admit that I did not read the description of this session before adding it to my schedule. I expected to hear about OpenNTF and ODA upgrades regarding GraphDB, a topic that Nathan Freeman discussed last year. As it turns out, GraphQL has nothing to do with GraphDB. GraphQL is a layer above the storage layer: a way of transferring data from a server to a client. To that end, there is a provider and a consumer. The consumer would in most cases be a website, or in some cases the server-side processing for a web application. The provider would most likely sit on the server hosting the database. It does not have to (in theory), but I could imagine that it would make data retrieval that much faster.
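To make the provider/consumer split a bit more concrete, here is a toy sketch in plain Python (not a real GraphQL library, and the record fields are made up): the consumer names exactly the fields it wants, and the provider returns only those fields, nothing more.

```python
# Toy sketch of the GraphQL idea: the consumer asks for specific fields,
# and the provider answers with exactly those fields -- no over-fetching.
# (Made-up data; a real provider would use a GraphQL library and a schema.)

PERSON_STORE = {
    1: {"name": "Ada", "email": "ada@example.com", "city": "Hannover"},
}

def resolve(person_id, requested_fields):
    """Provider side: answer a query by picking only the requested fields."""
    record = PERSON_STORE[person_id]
    return {field: record[field] for field in requested_fields if field in record}

# Consumer side: roughly what a query like `{ person(id: 1) { name city } }` asks for.
result = resolve(1, ["name", "city"])
print(result)  # {'name': 'Ada', 'city': 'Hannover'}
```

The point of the sketch is only the shape of the contract: the storage layer underneath (Notes databases, SQL, whatever) is invisible to the consumer, which is why GraphQL and GraphDB are independent concerns.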
As everyone who reads this site is aware, I am primarily involved with XPages development. Furthermore, this summer does not mark only ten years of living in Germany, but also five years of being a full-time State Certified Programming Professional, and eight years working at holistic-net GmbH including my apprenticeship. I am the person mainly responsible for two large applications. The first I built from the ground up, based on a pre-existing Notes application consisting of multiple Notes databases. The second I inherited and did my best to rebuild with a very limited budget. If there is one thing that I have noticed, it is that the biggest issue is retrieving data quickly. I have had numerous issues with full-text index searches hitting performance walls like a drunk in a labyrinth. What is worse is that there has never been any reproducible pattern. Everything is great and then *faceplant* (followed by the request for another beer). So I find the idea of a new search provider very appealing. That is, if it can work with Domino…
One thing that was mentioned in this session is the ability to use tools from Darwino in order to implement a Domino GraphQL provider. I must say right now that I have not looked into this tool myself yet. There were also a few caveats to it that I would need to verify before risking anyone’s wrath, or getting anyone into trouble when, in the end, I just remembered something incorrectly.
What I am taking away from today is the idea of creating a better way to search for data in Domino, providing that data to any consumer using GraphQL, and then consuming it from any front end that wants it, whether that be ASP.NET MVC or XPages, while making development as quick as possible for everyone. And one thing is clear: I have a lot of research to do…