Wednesday, June 25, 2025

How I passed the AWS Solutions Architect Associate exam

The AWS Solutions Architect Associate certification is one of the most sought-after cloud certifications. It focuses on designing optimized solutions on the AWS cloud, taking into account cost, resiliency, performance and security.

I started studying for this certification back in 2021, but due to an increased workload at my job, I had to put my certification studies aside.

I came back to it in April 2025, studied for two months and passed the test at the beginning of June 2025.

During my studies, I saw that many people had shared their study paths and tips in YouTube videos, so I decided to share mine as well.


Study material

I used the ACloudGuru AWS Certified SAA 2020 course, which I had purchased a while back, and returned to it when I resumed studying. I must say that this course, while very good in presentation, is outdated in content. It seems that ACloudGuru was acquired by Pluralsight and the course has not been updated in the past few years. This meant that I had a significant number of AWS services to catch up on.

My suggestion is to download the AWS SAA exam guide from AWS:

https://d1.awsstatic.com/onedam/marketing-channels/website/aws/en_US/certification/approved/pdfs/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf

Go over the services that you need to know for the exam and catch up on the ones that are not covered in your course. You can catch up by reading material from the AWS site or TutorialsDojo: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c03/

I would also recommend watching some demos on YouTube. I found this channel particularly helpful: https://www.youtube.com/@TinyTechnicalTutorials

Another option is to purchase the best-selling AWS SAA course on Udemy, by Stephane Maarek:

https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c03/?couponCode=ST16MT230625A

His course is highly recommended and also up to date.

I also used an AWS GPT on ChatGPT to study and clarify some services and their use cases.


Practice Exams

It is very important to take some practice exams before the actual exam. Look for practice exams that reflect real AWS exam questions, which are scenario-based rather than purely technical.

The exam questions present you with a scenario: an on-prem application that you need to migrate to the cloud, a hybrid cloud/on-prem setup, or a cloud application that you need to optimize for cost, high availability, performance or security.

There are several good sources for practice exams; they are not free, but they don’t cost much.

I used the TutorialsDojo practice exams which I purchased on Udemy:

https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams-saa-c03/learn/quiz/4394978#reviews

There are 6 practice tests in this bundle. You can take each test in exam mode, with a timer, or in practice mode, without a time limit, where you get the answer after each question. Every question comes with a detailed explanation, including links to the relevant AWS documentation; it is important to go over these explanations for the questions you got wrong. I found the TutorialsDojo practice exams to be harder than the real exam, so if you can consistently pass them, you should be able to pass the real thing.

I also used this free set of practice exams on YouTube:

 https://www.youtube.com/watch?v=FSsAff-bqyI&t=10118s&ab_channel=TechWithShapingpixel

Shapingpixel goes over hundreds of questions, giving the answer and a short explanation after each one. The explanations are not as thorough as TutorialsDojo's, but I found that his questions better represented the real exam questions.

There are other practice exams you can purchase as well: Stephane Maarek has his own exam bundle, and there are also bundles from WhizLabs and others.


Exam Tips

The exam has 65 questions, some of which have more than one correct answer. You get 130 minutes for the exam, although I got 140, probably because I’m not a native English speaker. This means you have about 2 minutes per question, so you should aim to answer the questions you know within 30 seconds or less. That should leave you enough time for the questions you need to think about or guess.

You can mark questions for review. After you finish going over the 65 questions, you get a review screen with the questions you haven’t answered and the ones you answered but marked for review. Go over these questions one by one; once you answer a question, it disappears from the review screen.

If you don’t know the answer and need to make an educated guess, remember the following:

  1. Usually there are two very similar answers, and one of them is likely the correct one.

  2. The answers always favor a solution that includes an AWS service, preferably a serverless one, rather than a custom solution.



Conclusion

While it is not easy to pass the exam on the first try, it is certainly possible if you put effort into studying. I believe that 2-3 months of studying is a reasonable amount of time, even if you are working a full-time job.

Remember that even if you are an experienced architect or developer, you still need to study hard. You will have an advantage over a novice, but that will not guarantee your success.

Good luck!



Thursday, May 8, 2025

The shared module microservice antipattern

Shared modules are a part of every medium-sized software project. The different modules in your application need some common code, components, services, or utilities that are used across the application.

Instead of duplicating this code in different modules, it is placed in the shared module to promote consistency, reduce redundancy, and simplify maintenance. It has been common practice in software projects to package the shared module as an artifact and make it available to the application through a package manager. For example, in Java, the shared module is written as a separate code project, packaged as a jar and pushed to an artifact repository; the application then uses it by declaring a dependency and downloading it through the package manager. The same is done in other languages.
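For example, with Maven the application consumes the shared module as an ordinary dependency (the coordinates below are made up for illustration):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>shared-utils</artifactId>
    <version>1.4.2</version>
</dependency>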


With the emergence of microservices, I have seen projects where developers implemented the shared module not as a library but as a microservice. This results in the other microservices having to call the shared-module microservice for common functions.



 


This is an antipattern. It introduces tight coupling between services, adds network latency, and creates a new point of failure. It also increases operational overhead with more deployment, monitoring, security, and resiliency requirements.


One common defense is that updating the shared module as a service avoids the need to redeploy all microservices. That sounds reasonable—until you consider the tradeoffs:


  1. In the era of automated deployments and DevOps, deploying several microservices should not be such an overhead. It certainly should not force us to abandon software engineering practices that have been established for years.

  2. The overhead of redeploying several services does not equal the overhead introduced by adding another microservice. As mentioned, a new service adds deployment, monitoring, security and resiliency overhead of its own.

  3. When making a change to a shared module, you will likely need to retest your application before deploying, since every module that uses the shared module is affected, whether or not the change breaks a contract. This will probably lead to additional fixes and deployments, so the idea that we can just deploy the shared-module service without additional overhead is rather naive.

Unless there's a strong architectural reason to expose shared functionality as a service—like cross-cutting concerns that truly need runtime access—it's best to stick with well-established practices: package the shared module as a library artifact.


Wednesday, May 7, 2025

Why Tech Depth Still Matters – Even for Senior Architects

 

There’s a common belief in the industry: “As you move up, you should focus more on strategy and less on code.”

While strategy, communication, and big-picture thinking are crucial for architects, there’s one thing I strongly believe:

Tech depth still matters. A lot.

Here are some of the reasons why:

🔹 Bad abstractions create bad systems. If you don’t understand how databases handle transactions under load or how Kafka deals with failures, you might design a system that looks good on paper—but falls apart in production.

🔹 Developers respect architects who "get it." If you can debug a tricky performance issue or explain why a specific microservices pattern is a bad fit for a project, engineers will listen to you. If you only speak in slides and diagrams, they won’t.

🔹 Tooling moves fast, but fundamentals don’t. Kubernetes, serverless, event-driven systems—these are tools. But deep knowledge of distributed systems, concurrency, and scalability will make you a great architect regardless of the tech stack.

🔹 Hands-on experience keeps you relevant. The best decisions often come from personal experience, not just theory. Even if you don’t code full-time, keeping your hands in the tech—through prototypes, reviewing PRs, or experimenting—gives you an edge.

Yes, architecture is about trade-offs, alignment, and long-term thinking. But without real technical depth, those decisions can be shallow.


Monday, May 5, 2025

How to detect the “service sprawl” antipattern in your microservices architecture

Service sprawl is an antipattern in microservices architecture in which you have broken down your application into an excessively large number of very small services. While it aims for fine-grained control, it can lead to increased complexity in management, deployment, and communication, potentially outweighing the benefits.





I have seen this happen especially in large IT organizations that are trying to adopt microservices. It is very easy for developers and managers to fall into the everything-is-a-microservice pitfall.

So, how can you tell if your application, or your company, is implementing this antipattern?

One of the most obvious red flags indicating service sprawl is finding that some of your developers are responsible for developing and maintaining several microservices at the same time. Of course, it might be that you have an expert developer working on several projects at once, but this is a red flag worth looking into.

There are two main reasons why this is a red flag for service sprawl:

  1. If each developer can develop and maintain several microservices at once, it likely means those microservices are too thin. That might be justified, but it is still a good idea to look into it.

  2. If a single developer is developing several of the application's microservices, it can also mean that the application does not suffer from the drawbacks of a monolith, and perhaps those services could be combined into a single application without the problems that monoliths are notorious for.


To avoid service sprawl, teams should regularly evaluate whether their microservices are delivering real separation of concerns and justifying the overhead they introduce. Not every boundary needs to be a service boundary. Sometimes, combining overly thin services into one can simplify the architecture without sacrificing modularity. The goal is not to have more services, but to have the right ones.


Thursday, March 12, 2015

How I improved JPA insert performance by 1300%

JPA has always been considered slower than plain JDBC or lighter frameworks like MyBatis, and bulk inserts in particular have been considered a weak point of JPA.
I recently had to implement a small report archiving system for a client (a financial organization). The goal of the project was to archive status reports sent to customers, so that they could be searched for and retrieved.
The input to the archiving system is an XML file containing metadata about all the reports, and one big PDF file that contains all the reports. The metadata is parsed and inserted into a database, which can then be searched for a specific report or a set of reports. For each report there is an index into the large PDF file (a start page and a number of pages) that can be used to extract that report from the large PDF file.


The project included a batch process to import the data into the archive system, as well as a web service front end that supports looking up reports and extracting them from the large PDF file.


The batch process needs to support archiving about 400k reports in one run. Each report creates a ReportArchive entity in the DB, which also has a one-to-many relationship with an additional metadata entity.
While JPA is not the most efficient technology for bulk inserts, it is the standard API used in all of the customer's Java projects, so it was the default choice.


I used StAX to parse the XML file and iText to extract pages from the large PDF file, with EclipseLink as the JPA provider. But what I want to talk about here are the simple optimizations that helped me improve JPA insert performance by a factor of more than 13 (!!).


Initial Run

The initial run used a simple JPA persistence implementation with a commit on each insert of a ReportArchive entity. The main ReportArchive entity had CascadeType.ALL on its child entity.
I had a sample batch file with 5786 reports, and inserting them took 7:43 minutes.
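For illustration, the initial version looked roughly like this (a minimal sketch; the persistence unit name and the parseReports() helper are placeholders, not the actual project code):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

EntityManagerFactory emf = Persistence.createEntityManagerFactory("archive-pu");
EntityManager em = emf.createEntityManager();
for (ReportArchive report : parseReports(xmlFile)) { // StAX parsing, omitted here
    em.getTransaction().begin();
    em.persist(report);           // CascadeType.ALL persists the metadata children too
    em.getTransaction().commit(); // one commit per report - this is the bottleneck
}
em.close();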


Optimization # 1

I decided to perform a commit every 500 inserts in order to improve performance.
I also configured EclipseLink to use JDBC batching, using the following configuration settings in persistence.xml:
<property name="eclipselink.jdbc.batch-writing" value="JDBC"/>
<property name="eclipselink.jdbc.batch-writing.size" value="1000"/>


The result: performance improved to 5:09 minutes for inserting the same 5786 reports.
I also experimented with commit batch sizes from 200 to 1000, but at least in my limited testing, 500 seemed to be the optimal size.


Optimization #2

The ReportArchive entity had a unique key on the report id, but its primary key was a report sequence, a database auto-increment field (the database was DB2). I decided to remove the report sequence and use the report id as the primary key, since it is unique anyway.
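In entity terms, the change looks roughly like this (field and class names are illustrative, not the actual project code). One likely reason this helps: with an identity-generated key, the provider has to read the generated value back after each insert, which works against JDBC batch writing.

import java.util.List;
import javax.persistence.*;

@Entity
public class ReportArchive {

    // Before: a DB2 auto-increment (identity) column as the primary key:
    // @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    // private Long reportSequence;

    // After: the naturally unique report id as the primary key.
    @Id
    private String reportId;

    @OneToMany(mappedBy = "report", cascade = CascadeType.ALL)
    private List<ReportMetadata> metadata;
}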


The result: performance improved to 3:29 minutes for the same 5786 reports.
I also tried to further optimize performance by calling clear() on the EntityManager between batch commits, but that did not produce any improvement.



Optimization #3

I added caching of SQL statements and turned EclipseLink's JPA logging off by adding the following configuration settings to persistence.xml:
<property name="eclipselink.jdbc.cache-statements" value="true"/>
<property name="eclipselink.logging.level" value="off"/>



The result: performance improved dramatically to 35 seconds (!!!) for the same 5786 reports.



Summary

To dramatically improve JPA insert performance when using EclipseLink:


  1. Avoid database auto-increment columns as primary keys if possible.
  2. Perform commits in batches - 500 was the magic number for my application.
  3. Add the following EclipseLink configuration settings:
<property name="eclipselink.jdbc.batch-writing" value="JDBC"/>
<property name="eclipselink.jdbc.batch-writing.size" value="1000"/>
<property name="eclipselink.jdbc.cache-statements" value="true"/>
<property name="eclipselink.logging.level" value="off"/>


Thursday, August 28, 2014

My Experience with ExtGWT

I recently had to develop a GWT application as freelance work for a client.
The application was actually a rewrite of an old application developed with JSP and Ext JS. To keep the same look and feel, I chose to use the extGWT library, having read some very good recommendations for it. I developed the application using the MVP pattern, which I found pretty straightforward to implement once you get the hang of it.
The extGWT library itself is very nice, with many widgets to choose from. However, I did find the following drawbacks:
1) CSS styling is difficult. extGWT comes with several themes, which look very nice compared to bare-bones GWT widgets. However, their CSS styling is not well documented; I found myself using Firebug to identify which class to change.
2) The extGWT library widgets have their own inheritance hierarchy, which means that an extGWT Button, for example, is totally different from a GWT Button: it does not implement HasClickHandlers and does not inherit from ButtonBase or FocusWidget. So you need to use different event handlers, and if you want to expose your widgets through their interfaces to the Presenter, as in the MVP pattern, you run into problems. For example, the addSelectionHandler() method of the extGWT Button is a method of Button and not of its interface, so you need to expose the Button itself to your presenter, not an interface.
3) Lack of a community site - the community version of extGWT is hosted on the Sencha web site, which kind of makes you wonder how much Sencha controls the community edition.
This was my first experience developing an application with extGWT, so I ran into several problems which I'd like to share with you. I guess some of these will not be new to experienced extGWT developers, but they should still be useful, at least to newbies.
So here we go:
1) Expanding a TreePanel - sometimes when you define a TreePanel you will want it to start expanded rather than collapsed. However, calling the expandAll() method of TreePanel did not seem to work. It turns out that you need to call this method only after attaching the LayoutContainer holding the TreePanel to the RootPanel, as in the sketch below.
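A minimal sketch of what worked for me (assuming the GXT 2.x API; populating the store is omitted):

TreeStore<ModelData> store = new TreeStore<ModelData>();
// ... populate the store ...
TreePanel<ModelData> tree = new TreePanel<ModelData>(store);
LayoutContainer container = new LayoutContainer();
container.add(tree);
RootPanel.get().add(container); // attach to the RootPanel first...
tree.expandAll();               // ...only then does expandAll() take effect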
2) FormLayout - extGWT has a panel called FormPanel, which uses a layout called FormLayout. This allows alignment of fields in a form. You also don't need to provide labels for the fields; instead, you set them as part of the field initialization by calling setFieldLabel() on the field. However, that only works if your panel's layout is FormLayout.
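For example (a minimal sketch):

FormPanel form = new FormPanel(); // FormPanel uses a FormLayout by default
TextField<String> customerName = new TextField<String>();
customerName.setFieldLabel("Customer name"); // rendered by the FormLayout
form.add(customerName);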
3) Using FieldSets - if you want to group form fields together, you should use FieldSets. One important thing to remember about FieldSets is that if you want to control the width of the fields or their labels, you should do it by invoking the appropriate methods on the FormLayout applied to the FieldSet:
setDefaultWidth - to set the width of the fields in the FieldSet.
setLabelWidth - to set the width of the field labels in the FieldSet.
This is as opposed to calling the appropriate methods on the FormPanel when you are not using FieldSets.
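A sketch of a FieldSet configured this way (the widths are arbitrary examples):

FieldSet addressFields = new FieldSet();
addressFields.setHeading("Address");
FormLayout fieldSetLayout = new FormLayout();
fieldSetLayout.setLabelWidth(110);   // width of the labels inside this FieldSet
fieldSetLayout.setDefaultWidth(180); // width of the fields inside this FieldSet
addressFields.setLayout(fieldSetLayout);
TextField<String> city = new TextField<String>();
city.setFieldLabel("City");
addressFields.add(city);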
4) Complex Forms - a FormPanel lays out its fields vertically by default. However, for this application I had to group the fields into several groups and lay them out in tabular form. See the following screenshot:


To lay out this form, I used four FieldSets, each with a FormLayout, and put them in a FormPanel with a TableLayout, roughly as in the sketch below.
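A sketch of the idea (the FieldSet variable names stand for the four groups in the screenshot and are illustrative):

FormPanel form = new FormPanel();
form.setLayout(new TableLayout(2)); // two columns of FieldSets
form.add(personalDetails); // each of these is a FieldSet with its own FormLayout
form.add(contactDetails);
form.add(addressDetails);
form.add(preferences);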
5) IE compatibility - neither GWT nor extGWT is officially compatible with IE9. In order to get your application to work with IE9, use IE8 compatibility mode by adding the following line to your html file:
<meta http-equiv="X-UA-Compatible" content="IE=8" />
6) DatePicker in IE - there seems to be an annoying problem with the DatePicker disappearing in IE before you can make a selection. This problem does not occur consistently, but it seems to appear when you use a DateField in something other than a FormLayout. The workaround is to revert to IE7 compatibility, although that may present other problems (see the next item).
So again, add this line to your html file:
<meta http-equiv="X-UA-Compatible" content="IE=7" />
7) Disappearing RadioGroup in IE7 - when using IE7 compatibility, the DatePicker problem is resolved, but another one soon surfaces: the RadioGroup disappears. The workaround here is to add the individual radio buttons to the container rather than adding the RadioGroup; the radio buttons will still need to be in a group to ensure that only one is selected at a time. See the sketch below.
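A rough sketch of the workaround (the container name is illustrative, and exact grouping behavior may vary between extGWT versions):

RadioGroup group = new RadioGroup(); // still groups the radios for exclusivity
Radio yes = new Radio();
yes.setBoxLabel("Yes");
Radio no = new Radio();
no.setBoxLabel("No");
group.add(yes);
group.add(no);
// Add the individual radios to the container, not the RadioGroup widget itself.
formPanel.add(yes);
formPanel.add(no);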
Hope you find this useful.

My Presentation about designing JEE applications structure

This is a presentation I gave recently on how to design the structure of JEE applications. This is a topic that is often overlooked and misused.
The lecture briefly covers JEE modules and JEE class loaders. We then discuss how to structure a JEE application consisting of web modules, EJB modules, utility Java projects and 3rd-party jars: where should we put the 3rd-party jars? How should we package the Java modules? What about module visibility between the modules?
The last topic in the talk is managing module dependencies in JEE applications. Dependencies are often not designed well, or are designed based on organizational structure constraints rather than architectural ones. We discuss the various options for dealing with dependencies and some tips on how to design them well.
Here is the link to my presentation on SlideShare:
http://www.slideshare.net/odedns/designing-jee-application-structure