

Netlify CMS
2022.03.19


I have maintained a personal journal for almost 10 years now. A diary. It’s a form of self-psychotherapy, a way to express my thoughts and feelings.

I originally used Google Docs, creating dozens and dozens of files, one for each day. Eventually, I realized that Google should not be trusted with confidential and personal information. Their crawlers index everything. Those thoughts may still be there, even after I deleted all the files. Who knows.

Then I migrated to a second solution: WordPress. I hosted a blog and used a plugin to lock it down so that only I could see it. It’s really good for blogging, with a lot of tools. I even wrote some extra plugins to manage aspects of the journal, like a word counter and a title generator (based on the post date).

However, keeping a WordPress installation up to date is critical. Due to its popularity and broad usage in e-commerce, WordPress is a target for many, many hackers. I started to worry that I could get hacked and have all my private writing exposed. So I decided to export all posts and move once again.

I tried keeping it offline, only on my computer. That is, for sure, the most secure way: anything on the internet, even if secured, could be hacked. But sometimes I want to write while away from home. On a trip, for instance.

I looked for a solution that was hosted online, secure (bonus if encrypted), and versatile (super bonus if open source). I spent a few days using SimpleNote, then Notion. Notion is very nice, and I used it not only to write my journal but also to track some daily routines, like weight, sleep time, and the amount of water I drank.

But again, I was not very confident about security. So I exported everything and decided to host it only on my computer. This time, with a twist: I was enjoying the Hugo static site generator, so I designed a blog front end and enabled it only locally, using git to track changes and Gitlab to host the repository. If I was away from home and wanted to write, I figured I could find an app to connect to the repository. Months passed and I never found a good mobile app, so I was locked into writing locally or accessing the repository using VSCode or whatever.

Finally, the Main Topic

The other day I decided to check out Netlify CMS. Created by the Netlify hosting service, it provides a dynamic admin front end to manage static websites. The result is still very much static, but the admin section is a single page whose JavaScript does everything. It communicates with an online git repository and commits any new post. Authentication is done through the git service, Gitlab in my case; it could be Github or any other git provider. If the user does not have access to the repository, the admin page stays blank. It reads the source in real time.

Besides that, I could also host the rendered journal online using Gitlab Pages, with access restricted to project maintainers. The same authentication would be required to see both the front end and the admin pages. Nice solution.

Netlify CMS is VERY simple. I can only imagine how complex it is under the hood, but the experience for end users is minimal. Still, it does the job: I can now access and write my journal from anywhere, including the browser on my phone.

The system relies on a monolithic configuration file that lives side by side with the content in the git repository. Traversing all the posts from a remote git repository is slow and inefficient, and I cannot imagine a larger team using it simultaneously.
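For reference, a minimal sketch of what that configuration file can look like for a Gitlab-backed setup (the repository path, folders and fields below are hypothetical, not my actual config):

# admin/config.yml — minimal Netlify CMS sketch for a Gitlab backend
backend:
  name: gitlab
  repo: my-user/my-journal         # hypothetical repository path
  branch: main
  auth_type: pkce                  # client-side OAuth flow, no extra server needed

media_folder: static/images        # where uploaded files get committed
public_folder: /images

publish_mode: editorial_workflow   # optional: drafts live on separate branches until published

collections:
  - name: journal
    label: Journal
    folder: content/journal        # one Markdown file per entry
    create: true
    fields:
      - { label: Title, name: title, widget: string }
      - { label: Date, name: date, widget: datetime }
      - { label: Body, name: body, widget: markdown }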

A nice feature is the draft mode: it automatically creates a separate branch with the draft content. Only when the user clicks “Release” does it merge the content into the main branch and publish it. Netlify CMS does not require Netlify itself, but the two are nicely integrated if you decide to use them together.

After the successful first experience with my diary, I implemented it on my blog as well. In fact, this very post was written using this pseudo-CMS!

Godot Jam Review
2022.02.10


At the beginning of the year I posted about giving Godot, the most popular free and open source game engine around, another try.

I posted some pros and cons at the time. I then decided to enter a jam to motivate myself to actually use it for a real, complete project, even if just a jam-sized game.

Now it’s time to review the whole process.

TLDR: I failed to complete the game. Along the way, I tried to create a pipeline to build a nightly version of the latest Godot, with C# support. It is partially working.

Bootstrapping

The jam theme was ocean. Bonus points for entries that:

  1. make all in-game sounds with your mouth(s)
  2. include a fishing mini-game
  3. include your favorite quote or pun in the game

So I started. As previously said, I planned to implement an old game of mine as the main game. The advantage was that I already knew what was needed and its general scope. Another plus was that the game was abstract, so I could save a lot of resources and time on presentation. And by making the sound effects with my mouth, I could neglect that front until the end.

For the mini-game, I looked for a small board game that I could easily implement in digital form. After some research, I settled on Leaky Boat, a fast-paced pen-and-paper game with dice.

So I started to code. But the problems with the C# integration were getting on my nerves. The Godot editor crashed more than 30 times on the very first night of coding. It was not blocking progress, but it was making everything very, very difficult.

New Version from Scratch

As a potential solution, I checked whether the in-development Godot 4 (I was using the “stable” version, Godot 3.5) had any nightly build available. I found someone who was creating these nightly builds! But only for the original, non-C# version. The repository was open, so I checked if anything could be salvaged. Not much.

So, as a detour, I decided to build a pipeline on Gitlab that would fetch the source code and compile it. Eventually I would schedule it to run every night. However, developing a build pipeline online is tedious and laborious: for every change, I had to run it online. In the case of Godot, triggering the compilation only to discover, 30 minutes after the build started, that it had failed due to some missing dependency in the build stack was not fun. I spent a whole day burning through my personal CI minutes doing this.
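The job I kept iterating on was, roughly, a variation of the sketch below (the base image, package list and SCons flags are assumptions based on Godot's documented Linux build, not my exact file; the extra Mono glue-generation steps are omitted):

# .gitlab-ci.yml — rough sketch of a nightly Godot + C#/Mono build job
stages:
  - build

build-godot-mono:
  stage: build
  image: ubuntu:20.04    # assumed base; the real job used a custom helper image
  before_script:
    - apt-get update
    - apt-get install -y git scons pkg-config build-essential mono-devel libx11-dev libxcursor-dev libxinerama-dev libgl1-mesa-dev libglu1-mesa-dev libasound2-dev libpulse-dev libudev-dev libxi-dev libxrandr-dev
  script:
    - git clone --depth 1 https://github.com/godotengine/godot.git
    - cd godot
    - scons platform=linuxbsd tools=yes module_mono_enabled=yes -j"$(nproc)"
  artifacts:
    paths:
      - godot/bin/
    expire_in: 1 week
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # meant to run as a nightly scheduled pipeline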

So, as a second detour, I decided to host a local Gitlab instance on my computer. It would allow me to iterate on the pipeline itself; once it was OK, I would migrate back to the online service. It took me two days to set this up. I first went with a local Kubernetes cluster, but it was getting too complicated. Then I migrated to a solution I am more familiar with: Docker Compose inside a virtual machine. I created it in VirtualBox (instead of KVM) because I planned to reuse the VM when working from Windows.
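The local instance boiled down to something like the following Compose file (hostname, ports and volume paths are placeholders; the runner still has to be registered against the instance afterwards):

# docker-compose.yml — minimal local Gitlab + runner sketch
version: "3"
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.local              # placeholder hostname
    ports:
      - "8080:80"                       # web UI
      - "2222:22"                       # git over SSH
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
  runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      - ./runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock   # lets the runner spawn job containers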

Downloading and building several docker images takes a lot of space! I had to resize the VM’s disk to a much bigger size than originally planned to accommodate the dozen or so images created and downloaded.

The plan was to create a helper image with all the tools needed to compile the code with or without C#, register it in Gitlab’s own container registry, and reuse it in the main pipeline. This step was working fine, but the actual build kept failing time after time.
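Building and registering that helper image was itself just another CI job; a minimal sketch (the Dockerfile name and image tag are hypothetical) looks like this:

# .gitlab-ci.yml fragment — build the helper image and push it to the project's registry
build-helper-image:
  image: docker:20.10
  services:
    - docker:20.10-dind                 # Docker-in-Docker so the job can run docker build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/godot-builder:latest" -f Dockerfile.builder .
    - docker push "$CI_REGISTRY_IMAGE/godot-builder:latest"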

To check whether the steps were right, I decided to compile on my own machine. I originally didn’t want to do this, to avoid polluting my PC, but it worked. Since I had already “wasted” several days on this detour, I decided to use this local build in my project and move on.

New Version, Old Problems

Godot 4 renamed several classes and changed several small things internally, and it took me a couple of hours to migrate to the new environment. The good thing is that I did not have much to convert. Done. The game was working the same as before.

Now it was time to continue development. But the problems continued just the same: the editor kept crashing time after time. I managed to make both the game and the mini-game functional, but with several restrictions. The pace was slow, because I had to investigate how to do things all the time, and the documentation was definitely not comprehensive for C# users.

After 5 days, I gave up. :( I could theoretically have finished the game in some state, but I decided to focus my attention on other projects instead. I might try this engine again in the future, but for now, I will return to Unity until I finish one of my current projects.

A couple of days after the end of the jam, Godot 4 alpha 1 was released. I still think that, if the devs do not provide nightly builds themselves, my pipeline project has its place.


Despite the failure, I learned a lot about Godot, Gitlab and Kubernetes, especially the latter two. I will use them in the future for sure, so I do not feel the weight of failure.

All the code, even though incomplete, is open source on my Gitlab profile.

Also, they organize a jam every month. I can certainly reuse all of this for a future one.

Trying Godot Engine Again
2022.01.13


It’s been about 10 years since I discovered Unity and fell in love. The editor was great, but what I really liked was programming in C#. It allowed me to be both organized and creative.

Despite it being among the top two engine suites in the world, I’m increasingly annoyed by it. It has become a huge piece of spyware, heavy and full of annoyances. Besides being super expensive (by Brazilian standards), the pricing model is much less indie-friendly than its nemesis, Epic’s Unreal Engine: users pay upfront instead of paying royalties on their own success.

Time to explore new ground! In fact, I try new stuff all the time, but now it’s time to actually settle on new ground. Some criteria to consider:

  • Open source preferred, almost required.
  • Avoiding C++ (because my games would leak memory for certain). JavaScript is discarded due to performance. Rust is hot, but any engine supporting it is probably still super beta.
  • Small footprint if possible.
  • Professional developer tooling, like headless compilation for CI/CD.
  • A big community or organization supporting it. Without strong backing, a project is an abandonware wannabe.

So for the past few months I have played with several options. Notably:

  • Unreal is unbearably gigantic (7 GB+), which hits especially hard on CI/CD. And the Linux editor is buggy.
  • I was excited by Stride/Xenko, but months after going open source, it was basically abandoned.
  • Godot has that annoying embedded scripting language, but the real no-go was the lack of an equivalent of ScriptableObject to create data assets.
  • O3DE is a possibility for the future. Lua as a scripting language is a personal nostalgia trip.

Spark of hope

Then I read an article about creating data assets in Godot. It used C#. It was not a trick, nor complex. Pretty straightforward. I decided to try the engine again. Less than 100 MB later, with no need to install or register anything, I started my (once again) first project. The goal was to load data from an asset created using C# code, just like a ScriptableObject in Unity. The test was a success.

So it’s time to try to create a full prototype game! I’m planning to join one of the several jams they organize to motivate myself to finish. No prizes involved, just the challenge. Things to explore in order to become comfortable with the engine:

  • Client-server multiplayer.
  • Scene streaming.
  • Animations.

Another idea is to recreate an old game of mine: PICubic. It was never commercially released, so it might be a good way to learn and still get tangible results.

Some general thoughts

After a week of playing with it, some thoughts:

Cons

👎 The design principle that each node has only one script attached, instead of the very common component-driven approach, falls short, especially when trying to design very complex systems out of small parts, like microservices in web development. I heard there is a spin-off that implements this, but it has no traction in the community.

👎 C# integration is still not good. At least on my computer, the editor crashes roughly every 30 minutes, at random moments when I hit play. Also, the editor does not display custom C# classes in the inspector. I designed several vanilla classes to organize the code, but I had to turn them into Resources to be able to edit their data.

👎 Linking assets in the editor does not respect class restrictions. One could assign a Player asset where a Weapon is expected and the editor will not complain. I have to validate every external variable before using it.

Neutral

😐 Referencing nodes in the hierarchy and assets in the project folder are two distinct things. Nodes in the hierarchy are accessed by NodePath, while prefabs (here called PackedScenes) have a different type.

😐 GDScript: focusing on a custom language instead of a widespread one like C# or C++ is a waste of both newcomers’ and Godot’s own developers’ energy.

Pros

👍 The “everything is a scene” approach fascinates me. I always thought this way in Unity: scenes are just a special kind of prefab.

👍 Creating an automatic build pipeline on Gitlab was a breeze (see the sketch below). Thanks to the smaller container and lower complexity, it takes less than 2 minutes to create a build for any platform. An empty Unity project takes that long just to download the 4 GB+ image, plus at least 5 more minutes to compile.
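For reference, a minimal sketch of such an export job (the community godot-ci image and the export preset name are assumptions, not my exact file):

# .gitlab-ci.yml — sketch of a headless Godot export job
export-linux:
  stage: build
  image: barichello/godot-ci:3.4.2      # assumed community image bundling Godot and export templates
  script:
    - mkdir -p build/linux
    # "Linux/X11" must match a preset defined in export_presets.cfg
    - godot --export "Linux/X11" build/linux/game.x86_64
  artifacts:
    paths:
      - build/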

The project’s development is somewhat slow for my taste, but they have been receiving more and more financial support in recent months, which might enable them to accelerate the pace. I’m especially interested in the new external language integration coming in Godot 4.

GitOps Lifestyle Conversion
2021.11.11


I’m currently fascinated by Gitlab’s handbook. I have heard of companies trying to be more open to the public, but the extent to which Gitlab does it is unprecedented. They document everything publicly. Most, if not all, internal processes are written down for everyone to see.

  • How is hiring done? It’s there.
  • How and when do employees get bonuses? It’s there too.
  • Which ERP is used? It’s there.
  • In fact, the whole list of external software and services used? It’s there too.
  • The scripts used to manage its own site? There too.
  • Personal information, like employees’ actual salaries? That, of course, is not there.

Too much information? Maybe. But it’s definitely inspiring.

Another source of personal inspiration comes from a guy on Twitter: Keijiro Takahashi. This Japanese programmer builds many mini-tools for himself but publishes everything on Github under minimalist licenses like MIT.

In contrast, I was checking my LinkedIn the other day and decided to share my Gitlab and Github accounts. There are so many projects over there. #ButNot. Most, almost all, were private! Many game prototypes, small side projects. All locked. Some are basically live backups, since they have not been updated in ages. So I decided to do three things:

  1. Open some of the closed projects
  2. Git-fy some of my personal and professional projects
  3. Documentation as code for my new company

The first is pretty straightforward. Mostly checking a box; sometimes adding a small README or LICENSE file; only a few times making real changes.

The second is a new mindset: I have dozens of small projects, from games to personal scripts, that I never tracked with git. Not only could I get better control over them, but I could also share them with the world. You will see more and more projects popping up on my Gitlab account page.

The third partially follows Gitlab’s way. I’m considering documenting most of the company’s processes in git-backed wikis. Not only is it good for sharing knowledge with other employees and partners, it’s also good for tracking the business decisions that changed those processes. A rather clever approach.

Hugo Images Processing
2021.08.11


Hugo, the static website generator, is a fantastic tool, as I have said before. Since I switched to it, I’m very confident that the site is fast and responsive.

However, my site is packed full of images. Some are personal. Some are really big. Some are PNGs and some are JPGs. I created a gallery component just to handle posts that I want to fill with dozens of them.

Managing post images is a boring task. For every post, I have to check:

  • Dimension
  • Compression
  • EXIF metadata
  • Naming

Dimension

![Hugo Images Processing size 2.jpg](Hugo Images Processing size 2.jpg)

Serving an image bigger than the screen is useless. It’s a bigger file to download, consuming bandwidth from both the user and the server. Google Lighthouse and other site metric evaluators all recommend resizing images to at most the screen size.

In Hugo, since I define the target width in the templates, this is easily automated using a few built-in functions:

{{ $image_new := ($image.Resize (printf "%dx" $width)) }}

Compression

![Loss compression comparison.png](Loss compression comparison.png)

My personal photos are, most of the time, taken as JPEGs. Recently I changed my phone camera’s default format to HEIC, which provides better compression for high-resolution photos. The web, however, does not support that format.

Some pictures used to illustrate the posts are PNGs. They have better quality at the expense of being larger. Mostly, only illustrations and images with text are worth keeping in a lossless format.

Whatever the format, I would like to compress as much as possible to waste less bandwidth. I’m currently inclined to use WebP, because it can shrink the final size by a considerable amount.

{{ $image_new := ($image.Resize (printf "%dx webp" $width)) }}

EXIF metadata

Each digital image has a lot, and I mean A LOT, of metadata embedded inside the file: the day and time when it was taken, camera type, phone model, and even longitude and latitude may be included by the camera app. It all reveals personal information that was supposed to stay hidden.

In order to share them on the open, public internet, it is important to sanitize all images, stripping out all of this information. Hugo does not carry this info along when it generates new images. So, as long as every image gets at least a minimal resize, this is handled by default.

Naming

I would like to have a well-organized image library, and it would be nice to standardize the file names. Using the post title to rename all images would be great, even better if combined with a caption or user-provided description.

However, Hugo does not allow renaming them. To make matters worse, it appends a hash code to each file name. A simple picture.jpeg suddenly becomes picture-hue44e96c7fa2d94b6016ae73992e56fa6-80532-850x0-resize-q75-h2_box.webp.

An incomprehensible mess. If you know a better way, let me know.

So What?

So, if most of these routines can be automated, what’s the problem?

The main problem is that Hugo has to pre-process ALL images upfront. As mentioned in the previous post, this can take a considerable amount of time, especially when converting to a computationally demanding format such as WebP.

Netlify is constantly hitting its build time limit, all because of the thousands of image compressions. I am planning to revert the commits where I introduced WebP and reapply them little by little, allowing Netlify to build a version and cache the results at each step.

There are a few categories of images:

  • gallery full-size images: there are hundreds of them, so they take a lot of the processing time, but that processing is what strips the metadata from the originals. The upside is that they are rarely clicked and served.
  • gallery thumbnails: the actual images shown in gallery mode. They account for the biggest chunk of the main page’s overall size whenever a gallery is among the 10 latest posts.
  • post images: images that illustrate each article. They are resized to fit the page width, so compressing them represents a nice saving.
  • post top banners: some posts have a top image. They are cropped to a banner-like size, so they are generally not that big.

Over the next couple of hours, I will try to apply the WebP code to each of these groups. If successful, it will save hundreds of megabytes in the build.

Bonus Tip

Hugo copies all resources (images, PDFs, audio, txt, etc.) from the content folder to the final public/ build, even if you only use the resized versions. Not only does the build become larger, but the original images whose metadata you wanted to hide are still online, even if they are never directly referenced in the HTML.

A tip for those working with Hugo and a lot of processed images: add the following to the content front matter to instruct Hugo not to include these unused resources in the final build.

cascade:
  _build:
    publishResources: false

Let’s build.

Edit on 2021-08-25

I discovered that Netlify has a plugin ecosystem, and one of the plugins available caches Hugo resources between builds. It should drastically speed up build times and open up the possibility of converting all images to WebP once and for all. I will test this feature right now and post the results later.

Edit on 2021-09-13

The plugin worked! I had to set it up using file configuration instead of the easy one-click button. Build time went from 25 minutes to just 2. The current cache size is about 3.7 GB, so the original times are totally understandable.

It will allow me to push more frequent updates. OK, to be frank: the build will no longer restrict my posting frequency. However, patience, inspiration and focus are still the main constraints on blogging.

In the netlify.toml file at the root of the repository, I added:

# Hugo cache resources plugin
# https://github.com/cdeleeuwe/netlify-plugin-hugo-cache-resources#readme
[[plugins]]
package = "netlify-plugin-hugo-cache-resources"

[plugins.inputs]
# If it should show more verbose logs (optional, default = true)
debug = true

Project Curva
2021.03.14


For the last 9 years, I have been working as a planner and controller at a multinational Brazilian oil company. The team consolidates all the planning information from the entire company, analyzes it and reports to the board of directors.

For all these years, I’ve struggled to deal with some basic business scenarios:

  • At the very end of the process, someone in the chain of information submits a last-minute update that cannot be ignored
  • The board decides to change the plan
  • Existence of multiple simultaneous plans, for optimistic and pessimistic scenarios
  • Changes in the organizational structure

The information systems currently used or developed by the company are simply too restrictive to accommodate these business cases. The usual workaround is to build entire systems out of dozens of spreadsheets: a patchwork of data, susceptible to data loss and with zero control.

To address this, I decided to develop, myself, a system that is both flexible and powerful. The core propositions are:

  • Versioning: instead of overwriting data whenever there is a change request, the system should preserve the existing data and generate another version. Both should remain accessible, in order to allow comparison and auditing.
  • Branching: beyond sequential versioning (v1, v2, v3), it should allow users to keep multiple current versions. Creating scenarios or even temporary exercises should be effortless.
  • Multiple dimensions: for each unit (i.e., a project in a list of projects), the user can enter future CAPEX, OPEX, production, average cost, number of workers or any arbitrary dimension. It’s all about capturing future series of values, regardless of their meaning.
  • Multiple teams: within the same organization, users can create inner teams that deal with different aspects of the business. The system should let users define the list of units to control (projects, employees, buildings, or whatever), their dimensions of measurement, and then control user access to all this information. It’s a decentralized way to create plans.
  • Spreadsheets as first-class citizens: small companies might not use them much, but any mid-to-big company uses spreadsheets for everything. Importing and exporting system data as Excel/LibreOffice/Google Docs files is a must.

With this feature set in mind, I have spent my spare time over the last 3 months creating what is temporarily called Project Curva. I will post more about it in the future: the technology used, the technical challenges and some lessons learned.

A beta is due by the end of April 2021.

Update 2021-10-18

The project is called NiwPlan and can be checked out at NiwPlan.com.

Hugo
2020.12.03


Hello World. Testing the new site!

For the Nth time, I have migrated the blog to a new blogging system. This time, I’m using Hugo.

Hugo belongs to the class of CMSs that generate static sites. Much like the difference between compiled and interpreted programming languages, the whole site is generated beforehand and the result is uploaded to a server.

The main advantages of this method are a substantially faster site and zero attack surface from the CMS. The main disadvantages are the less user-friendly interface and long build times.

Let’s dig into these issues:

Faster Experience

Since all the pages are now static and pre-made, the only variable is the server latency to deliver the files. The page does not need to be built on the fly for each user, which can be tremendously slow and also wastes server CPU, rebuilding the same page time after time after time.

Most CMSs have some caching system to mitigate this issue. They first check if the page has already been built. If so, serve it; if not, build it and save the result. The problem lies in implementing a CDN and/or a technique to invalidate the cache and force a rebuild (in case the content was altered by the author).

![Build time.png](Build time.png)

More Secure

Since it does not build pages on the fly, it eliminates the security issues inherited from the language runtime. It also does not access any type of database, and there is no admin page. Even DoS attacks are easier to withstand, since a CDN can shift the traffic to another server easily.

User Interface (Lack of)

Well, Hugo takes a developer-driven approach that requires the user to work in an IDE and compile the whole site. It does not offer any kind of interface where you can drag and drop widgets. It is definitely not WYSIWYG.

If you are used to programming tools, you will not have much trouble; it will feel very familiar. For a non-tech-savvy mom blogger, Hugo is a no-go.

Build Times

Even seeing a single post that you just wrote takes time. Like a compiled programming language, the site has to be built before you can check it. Hugo has a built-in server that propagates incremental changes and is really fast, so iterating on content will not slow you down.

It takes even more time if you have extra processing steps, like resizing images.

Rebuilding the entire site, though, might take a while. Thankfully, for production the whole build can be delegated to CI/CD tools. Using GitHub or Gitlab, the site is automatically built on each commit.
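On Gitlab, a minimal job looks roughly like this (the public Hugo image and branch name are assumptions; GitHub Actions or Netlify have equivalent one-file setups):

# .gitlab-ci.yml — build the site on every commit and publish it via Gitlab Pages
pages:
  image: registry.gitlab.com/pages/hugo:latest   # assumed public Hugo image
  script:
    - hugo --minify                    # generate the whole site into public/
  artifacts:
    paths:
      - public                         # Gitlab Pages serves this folder
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'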

The process of writing this post, the very first on the new platform, was quite nice. But then, I’m in the sweet spot where the product’s requirements match my technical skills.


Anyway, I’m going to try to post more content in the following months. :)


GDC: Awesome Video Game Data 2017
2017.10.25


I follow the GDC (Game Developers Conference) channel on Youtube and I totally recommend you do the same. There is a great number of excellent talks (of course there are exceptions, like then-GDC-board-member Peter Molyneux making plain and simple propaganda).

There is one that I just watched that is very eye-opening: the annual talk from the folks at EEDAR (a market data company) presenting numbers for the whole industry. It covers prices, sales, regions, mobile/PC/consoles. Everything!

It is a must-see.

Linux on Notebook, Take 2, Mini-Buntu
2017.07.13


My notebook is not new; I bought the Yoga 2 Pro almost 4 years ago. Two years back, I got annoyed with Windows and decided to install Linux on it. I was scared because, unlike most of my PCs, which I assembled myself, the Lenovo had a warranty and possibly custom hardware.

As I said, that attempt failed. It was giving me too many headaches. I also generally use my notebook to program and develop games, and because the Unity editor was not available (at least not in a reasonable version), I was kind of forced to migrate back to Windows 10.

![Linux 3.jpg](Linux 3.jpg)


About 3 months ago, I decided to give it a second shot. In case I was not clear: I have used Linux on the desktop, in dual boot, for about 15 years. I saw Ubuntu enter the market. But once I started to be systematically involved in making games, the need for Windows appeared too. Back to the experiment. It was a requirement for me that the general performance had to be great. Not good, great. I would also prefer to stay on a Debian-like distro because I’m familiar with it, Ubuntu family if possible. So I selected both Kubuntu and Lubuntu for a ride.

Kubuntu was the one I had tested before. I have liked KDE since version 2, but again it failed to deliver a blazing fast experience. On the notebook, the boot time was several minutes, while even Windows 10 took only a couple of seconds. So I decided to format and install Lubuntu.

Lubuntu is an Ubuntu derivative using the LXDE desktop environment. Super light. Man! Boot was fast and, once ready, it consumed a fraction of the RAM of both Windows and Kubuntu. However, during my 4-week test it kept giving me lots of little problems. So I decided to make another switch.

Xubuntu, in a similar vein, came next (the screen runs at its native 3200x1800), which is fine on a 13-inch monitor. Then came the software selection. Lubuntu was super short on preinstalled stuff, which I like because I generally don’t use it anyway, but Xubuntu came with some. The good news is that this selection does not consume much drive space, and the apps are light enough in case I really want to use them.

![Linux 2.jpg](Linux 2.jpg)

I installed Steam and it works nicely. Unfortunately, GOG’s Galaxy does not currently have a Linux version, so games have to be installed manually one by one. Also, your play time is not tracked, nor are you alerted about updates. A second negative point is that most GOG games do not use the new cloud save feature, so playing a bit on the notebook and a bit on the desktop only works for games where progress does not matter. Fingers crossed for the future.

![Linux 4.jpg](Linux 4.jpg)

Finally, I was looking for a game engine that works on Linux. Unreal, as I found out, works, but you have to compile it yourself. GREAT 🙁 I did it. It took hours, and the result was too many crashes and too big a suite to work on a notebook. I was once again looking for a lightweight engine. I tested Godot and liked it, but it is still lacking.

Then I found out that Unity is in fact releasing updated Linux builds of the engine through an alternative channel (the forums). I installed it too. It crashes a lot, but it works. I’ve been playing the game developer on the notebook ever since. Together with the excellent Visual Studio Code editor, it makes my days fun.


After two and a half months working most of the time on this notebook, I could be a happier man, but in general I already am one. It is fast, close to the environment I face when dealing with cloud and internet stuff, and free. I plan to migrate to a newer machine next year, mostly to get more RAM and better battery life. Currently, it lasts 3 hours, which is by any measure a shame for a mobile device.

This is currently my desktop.

A Study in Transparency: How Board Games Matter
2016.02.23


I just watched a GDC presentation of the same name by the developer Soren Johnson, from Mohawk Games. I agree with him almost entirely. The basic premise of his presentation is that video games should pay more attention to physical board games, learning the techniques they use to create engagement. The motif: board games have a transparent set of rules and a transparent implementation of luck, and video games should have such transparency too in order to engage players.

At the end, when he opened the floor to audience questions, he seemed nervous answering and backed away a bit from this point of view. There were a couple of questions that I want to discuss:

What if the game system is so complex that you deliberately want to hide it from the player? (watch the original answer)

In Civilization, as pointed out in the presentation, the designers opted to display each variable or modifier as a series of bullet points in the UI. That is because the list of modifiers is long and complex; when engaging in a diplomatic mission, the player must understand what is affecting the relationship. But hey, that is only one way to solve the problem.

In Shadow of Mordor, the orc leaders challenge each other for power and status. Each orc also has a list of strengths and weaknesses. All this information is presented to the player in a very elegant way. It exemplifies Soren’s argument.

But what if the game is so complex that it is really difficult or impossible to present all the information to the players? Well, that is probably a flaw in the game. If there is too much going on, most likely the player’s actions only slightly impact the result. The player will feel that it is pure luck, that he is just a passenger. It is the game designer’s job to balance it back; otherwise, the game will suffer from a bad reputation and bad sales. Too shallow and too complex have to be considered equally serious problems.

Notice that another possible outcome is when the game becomes a cult hit and the players who endured the gameplay form a community to share information and demystify the obscure rules. A good example is Dwarf Fortress, a super weird and complex game that is loved by many precisely for being weird and complex. My suggestion: do not count on this path.

If you expose the whole set of rules and internal numbers, the game becomes a matter of optimization instead of experimentation. (watch the original answer)

It can be a problem, yes. Tic-tac-toe suffers from exactly this problem: you can anticipate the whole match to the point that you CAN guarantee you will never lose (you cannot guarantee that you will win, though).

But as a designer, you can implement countermeasures to fight it. Luck and complex decision trees, for example.

Luck is the classic solution. By implementing a series of unknown events, you make it very difficult to predict the future: random numbers, random events, shuffled cards. Notice that luck is merely an element that one cannot control or predict, like the weather, a dice roll, or a hidden enemy in the fog of war.

A complex decision tree means both making several factors relevant for each decision and having a game with many rounds. Think of Chess or Go: there are so many possible moves per round that, while theoretically possible, it is practically impossible to compute them all in order to make the single best decision.


In general, I am with Soren. I might write more about this in the future, because most people think that creating games is just intuition and art. But there is a lot of reasoning and many logical decisions that should guide the construction of such products.
