The ChatGPT Programming Experience: A Second Attempt

Welcome back to my series on using ChatGPT for programming (written with the help of ChatGPT)! In this second installment, I’ll share my experiences using ChatGPT to build a small software project.

Here you can find the chats: https://github.com/juangamnik/chatgpt-schedule/tree/main/chatgpt

I set out with the premise that I would not write more than a few lines of code, leaving most of the work to ChatGPT. Let’s dive into the numbers, the areas where ChatGPT slowed me down, and the moments where it truly shined.

Read More

Large Language Models, GPT, and I

This is the first blog post in a series describing my first experiences with ChatGPT as a pair programmer and assistant for a developer. As you will see, these experiences had some ups and some downs, but all in all – spoiler alert – on the one hand, the advancement in A[G]I (Artificial [General] Intelligence), especially with regard to NLP (Natural Language Processing) using LLMs (Large Language Models) and GPTs (Generative Pre-trained Transformers), is very impressive. On the other hand, I had to revise some of the observations (and criticism) I made just weeks later, since the pace of evolution in the AI scene is so freaking high.

So these are the blog posts of this series:

  1. This post
  2. The ChatGPT Programming Experience: A Second Attempt
  3. The Evolution of Large Language Models in Programming: A Broader Perspective

But let’s start at the beginning (the following text has been created with the help of GPT-4)…

Read More

Spring Boot Passthrough JWT with RestTemplate

In a microservice environment it is often the case that calls from a client to a service result in further calls to other services. One possible scenario is a call to a GraphQL service which gathers information from different backend (REST) services and presents it as a cohesive data graph.

In this scenario the user is authenticated to the backend services via OAuth2 (e.g., Keycloak or a Spring Boot OAuth2 server), and the GraphQL service should pass through the authentication header (a JWT bearer) of incoming requests to the backend services. This way the authentication has to be validated only once, in the backend services, as “near” as possible to the (REST) resources.

This is not meant as a replacement for service-to-service authentication, but as an addition if you do not use the full OpenID Connect standard with a separate identity token to pass on, but still want to serve verifiable user data to your backend services. In fact, you may use this approach to pass through any header (including an identity token). This is just the scenario that I faced.
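The core of the idea is a ClientHttpRequestInterceptor for the RestTemplate that copies the Authorization header of the incoming servlet request onto the outgoing call. Here is a minimal sketch of that, assuming the outgoing call happens on the thread of the incoming request (the class name is mine, not from the full post):

import java.io.IOException;

import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

public class JwtPassthroughInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
            ClientHttpRequestExecution execution) throws IOException {
        // Grab the servlet request bound to the current thread (if any).
        ServletRequestAttributes attrs =
            (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        if (attrs != null) {
            String auth = attrs.getRequest().getHeader(HttpHeaders.AUTHORIZATION);
            if (auth != null) {
                // Pass the incoming JWT bearer on to the backend service.
                request.getHeaders().set(HttpHeaders.AUTHORIZATION, auth);
            }
        }
        return execution.execute(request, body);
    }
}

Registering it is a one-liner: restTemplate.getInterceptors().add(new JwtPassthroughInterceptor()). Note that RequestContextHolder is thread-bound, so asynchronous resolvers that leave the request thread need extra care.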

Read More

Repair a Damaged Package System after Ubuntu Dist-Upgrade

Happy new year.

My blog runs on a VM at Hetzner with an Ubuntu LTS system. That means 5 years of support… I was running trusty from 2014, so there should be support until 2019. But not every open source project makes you this promise, just the Ubuntanians. So, support for Owncloud ran out last year, and I thought the days between Christmas and New Year would be a good time to switch to a new version.

Hence, I did two dist-upgrades, one after the other: from trusty to xenial, and from xenial to the current LTS version, bionic (a new LTS version comes out every two years). The first upgrade was “successful”, with a lot of configuration adaptions needed afterwards. Then, after everything worked again, I did the second upgrade, which failed because of this issue.

You do not want your system showing you such a message during do-release-upgrade.

That is, I had to fix a distro upgrade that failed in between… challenge accepted 🤓.

Read More

Change c-time on Unix-Based Systems Based on Filenames

For quite some time now I have had a paper-free office (at home). I still physically file the papers I get, but in addition I scan all the paper documents, tag them and put them in a folder. I use a very easy system. For the very recent documents (and the ones that are work in progress) I have a draft folder. Furthermore, there is exactly one document folder per year, and I store everything in there (incoming and outgoing documents, scanned ones and ones that I get mailed, even some printed-to-PDF emails for document-like emails). Each file has a common naming scheme. There is one part that is relevant for this post: at the beginning of each file name I put the date of the document in the format YYYYMMDD. This way, the documents are ordered chronologically within a year if I sort them by name. There is a lot more to my filing system, and if someone is interested, please leave a comment, but for this post this should be enough about my way of filing documents (digitally).

The issue I would like to address here is that the date when I scanned a file and the “real” date of the document diverge. Sometimes it even happens that two documents have one chronological order in “the real world”, but their scan/creation times are the other way around. I do not like this situation. Therefore, each year when I “finish the year”, I run a script (on macOS) which adapts the ctime to the date part in the name of the file (a one-liner, which I put on 5 lines for better readability):

find . -name "2017*" | while read -r file; \
  do thedate=$(echo "$file" | \
  sed -E 's/^[^0-9]*([0-9]+).*$/\1/'); \
  touch -t "${thedate}0000" "$file"; \
  done

If you are on another Unix-based system with sed, you can use -r instead of -E. I am unsure why this option behaves differently on macOS, although I installed (and use) GNU sed via Homebrew.

Exciting 🤓.

Creating an Alpha Channel Video with Final Cut Pro X

Lately, I faced the task of making a long-term-support (LTS) backup of an FCP X “green screen” video project. I had two constraints:

  1. The result should take only a “small” amount of disk space.
  2. It should be possible to alter the background, text effects, image effects and so on without losing quality.

The original data took about 500 GB because of junk takes, but I wanted to store only 30–50 GB. Unfortunately, we chose to take very long shots, so it was not easy to remove the junk from the FCP X project file (there are paid solutions for this, but the ones I found do not work with combined clips). Just rendering the video in good quality would solve constraint one (C1), but it is hard to make changes to such a file (C2).

Therefore I chose another road: render the green screen scene with an alpha channel as one long video, in order to sort the wheat (good video) from the chaff (bad video). Using the original green screen instead of an alpha channel was not an option, since I animated the green screen video channel (e.g., moving it from left to right), which added black “letter-boxes”. An easy solution would have been to use the keyer to remove the green screen and add a new one via a green background (i.e., an artificial green screen). This can be rendered to a video, and a second keyer can be used after a reimport. But this seemed kind of lame (as in unprofessional) to me. I wanted a cool alpha channel video as I have seen in the making-ofs of several films: an additional video (channel) which is just black and white, containing the alpha information.

So, I googled, and there was surprisingly sparse information on this topic. There are some formats that can contain an alpha channel, like Apple Animation and Apple ProRes 4444, but they all have in common that they take an unbelievably high amount of disk space, which violates C1 (and they didn’t work for me…). I didn’t find any how-to or tutorial on THE INTERNET that could help me.

Challenge accepted. 🤓

Read More

Splitting up “Streams” using jOOλ, Kotlin, and ReactiveX

The Java 8 Stream API is pretty cool, as you can see in my last post BFS with Streams. But there is a catch: you cannot reuse streams. Consider the following Java code:

import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Stream;

final List<String> helloList =
    Arrays.asList("H", "e", "l", "l", "o", ", ", "W", "o", "r", "l", "d", "!");

final Stream<String> helloStream = helloList.stream();
final Predicate<String> checkUpper = s -> !s.isEmpty()
    && !s.substring(0, 1).toUpperCase().equals(s.substring(0, 1));
helloStream.filter(checkUpper);
helloStream.filter(s -> !checkUpper.test(s));

This results in an IllegalStateException: stream has already been operated upon or closed.

The problem with Java 8’s Stream API is that it is designed for parallel execution. This decision introduced some constraints, and unfortunately there is no sequential-only switch. What you can do is collect the results with groupingBy into a map and then create two new streams from that. But collecting is a terminal operation, not lazy, and therefore inefficient (especially in combination with early-exit operations like limit). You can also try to apply the first filter, chain it with a peek, and finally apply the second filter. But since only elements matching the first filter will reach the second one (i.e., a && !a, which is always false), you won’t get any elements past the second filter.

If you have a so-called cold source (e.g., a collection), you can just use two different streams, which results in two iterations. But for hot sources (like a network or file I/O stream), this is not that easy. A possible solution is to cache the input in a collection, i.e., cool it down. But this comes with a space and performance penalty. So let us see what our options are…
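Just to make the collect-and-split baseline concrete before we look at the alternatives, here is a minimal sketch using Collectors.partitioningBy, the two-way special case of groupingBy (the names reuse the example above):

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;

final List<String> helloList =
    Arrays.asList("H", "e", "l", "l", "o", ", ", "W", "o", "r", "l", "d", "!");
final Predicate<String> checkUpper = s -> !s.isEmpty()
    && !s.substring(0, 1).toUpperCase().equals(s.substring(0, 1));

// Terminal operation: eagerly sorts every element into one of two buckets…
final Map<Boolean, List<String>> buckets = helloList.stream()
    .collect(Collectors.partitioningBy(checkUpper));

// …from which two fresh streams can be created (at the cost of laziness).
final Stream<String> matching = buckets.get(true).stream();
final Stream<String> rest = buckets.get(false).stream();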
Read More

Task Queue for Heavy Weight Tasks in JavaSE Applications

Modern hardware systems have a multi-core architecture, so in contemporary software development concurrency is an even more crucial ingredient than before. But as we will see, it is of great importance for single-core systems, too. If you have already created a Java Swing application, you have probably made acquaintance with the SwingWorker, in order to delegate long-running tasks from within Swing events to another thread. But first things first. In Swing, all painting and event handling of the graphical user interface is executed in one thread, the so-called event dispatch thread, which comes from AWT, Java’s older, underlying window toolkit. Therefore, most Swing methods are not thread-safe, meaning that you have to prevent race conditions yourself, but Swing offers the possibility to dispatch tasks to the event dispatch thread. It is highly recommended to do everything that updates the GUI in that thread.
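Before the task queue itself (developed in the full post), here is a minimal, self-contained sketch of the SwingWorker pattern just described; the frame, label, and sleep are stand-ins of my own, not code from the post:

import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;

public class SwingWorkerDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // Build the GUI on the event dispatch thread.
            JFrame frame = new JFrame("SwingWorker demo");
            JLabel status = new JLabel("working…");
            frame.add(status);
            frame.setSize(240, 80);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            new SwingWorker<String, Void>() {
                @Override
                protected String doInBackground() throws Exception {
                    // Background thread: never touch Swing components here.
                    Thread.sleep(2000); // stand-in for a heavyweight task
                    return "done";
                }

                @Override
                protected void done() {
                    // Back on the event dispatch thread: safe to update the GUI.
                    try {
                        status.setText(get());
                    } catch (Exception e) {
                        status.setText("failed: " + e.getMessage());
                    }
                }
            }.execute();
        });
    }
}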
Read More

Exception Handling for Injection Interceptor

Remember my post “Circular Injection of Util Classes in EJB 3.0”? There I offered a somewhat ugly solution to the circular dependency problem for managed classes in Java EE 5. In a preceding post (Circular Dependencies of Session Beans) I grumbled about the exception handling in JBoss 5.1: it leaves you alone with a meaningless error message, and you have to guess what the problem is. Unfortunately, my own code presented in the former post is even worse, since it logs problems but ignores them. It wasn’t meant for productive use, but it was annoying to me, so here is a little tune-up adding exception handling and readable error messages.
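The shape of the tune-up, as an illustrative sketch only (the names below are mine, not the actual interceptor code from the earlier post): instead of logging and swallowing a failed lookup, wrap it in an exception with a readable message.

import java.lang.reflect.Field;

import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class InjectionErrorHandling {

    private InjectionErrorHandling() {}

    // Illustrative helper: inject a JNDI-looked-up value, failing fast and loudly.
    public static void inject(Object target, Field field, String jndiName) {
        try {
            field.setAccessible(true);
            field.set(target, new InitialContext().lookup(jndiName));
        } catch (NamingException e) {
            throw new IllegalStateException("Lookup of '" + jndiName + "' failed for field '"
                + field.getName() + "' of " + target.getClass().getName(), e);
        } catch (IllegalAccessException e) {
            throw new IllegalStateException("Field '" + field.getName() + "' of "
                + target.getClass().getName() + " is not accessible", e);
        }
    }
}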
Read More

Circumvent Nested Transaction Issues in Tapestry-5.x

Ajax component events may be wrapped in a transaction, as I pointed out in “Transaction Handling for Ajax Components in Tapestry-5.x”. But on some occasions an Ajax component event is surrounded by a regular component event. In that case the code in the ControllerUtil of the article “Transaction Handling in Tapestry5” will lead to ‘transaction already active’ problems, since we try to begin a transaction in the nested Ajax component event although there is already an active transaction attached to the current thread. We can overcome this situation by checking whether an active transaction is present and beginning/committing/rolling back a transaction iff not. This behavior is similar to the default transaction attribute REQUIRED in Java EE.
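In plain JPA terms, the check boils down to something like the following minimal sketch; the Supplier stands in for the wrapped component event, and the real, Tapestry-specific code lives in the ControllerUtil of the linked articles:

import java.util.function.Supplier;

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

public final class RequiredTx {

    private RequiredTx() {}

    // Begin/commit/rollback only if no transaction is active yet (REQUIRED semantics).
    public static <T> T required(EntityManager em, Supplier<T> work) {
        EntityTransaction tx = em.getTransaction();
        boolean startedHere = !tx.isActive();
        if (startedHere) tx.begin();
        try {
            T result = work.get();
            if (startedHere) tx.commit();
            return result;
        } catch (RuntimeException e) {
            if (startedHere && tx.isActive()) tx.rollback();
            throw e;
        }
    }
}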
Read More