Human Talk Paris – September

Last week, Theodo was at the Human Talk Meetup.

It’s a small afterwork event where developers speak freely about any topic, in 4 sessions of 10 minutes each. Each talk was really interesting and the subjects were quite diverse, so I will dedicate one paragraph to each talk of the evening.


Object Oriented Programming: historical error or path to follow?

This session, presented by Frédéric Fadel, was of particular interest to me, for it smartly challenged our standardized vision of programming. Why do we use OOP? Mainly for historical reasons. But lots of things we do for historical reasons have become standards: are we sure they are the right choice? Democracy, capitalism, monogamy, compulsory schooling… who knows what other standards will emerge tomorrow?

He therefore questioned the pertinence of 3 fundamentals of OOP: encapsulation, polymorphism and inheritance. Whereas he totally agrees with the first concept and finds the second useful too, he mainly criticizes the last one. Because information is by nature completely virtual, it is inherently difficult to model into labeled boxes and fixed objects. His talk is an invitation to curiosity, making you want to learn more about Aspect Oriented Programming.
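To make the inheritance critique concrete, here is a small sketch of the tension between a fixed class hierarchy and a more open, composed model. This is my own hypothetical illustration (the `Document`, `Invoice` and behavior names are made up), not an example from the talk:

```javascript
// Inheritance forces an up-front, fixed taxonomy:
class Document { }
class Invoice extends Document { }
// ...but what if an object must be an Invoice AND archivable AND
// something we have not named yet? Composition keeps the model open:
const printable = { print() { return `printing ${this.title}`; } };
const archivable = { archive() { return `archiving ${this.title}`; } };

// Build an object by merging whichever behaviors it needs.
function makeRecord(title, ...behaviors) {
  return Object.assign({ title }, ...behaviors);
}

const record = makeRecord("invoice-42", printable, archivable);
// record.print() → "printing invoice-42"
```

Aspect Oriented Programming pushes this idea further by separating cross-cutting concerns from the objects themselves.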

As I do not intend to rephrase the talk here, if you want to know more, I invite you to watch the full video of the presentation.

Presentation of Scrapoxy

This session was less philosophical and more pragmatic, as it went straight into methods for bypassing the restrictions websites place on web scrapers. After a short introduction about what is at stake (getting information and reselling it :p), Fabien Vauchelles enumerated the 3 main restrictions companies put in place to limit scraping.

The first limitation is simple blacklisting based on IP and hits per minute. The second one is advanced blacklisting based on more complex detection techniques (User-Agents or even user behavior: how does a human user actually use the keyboard, the mouse…). The third one often comes into play when a strange behavior is detected: the website asks users to confirm that they are human with a captcha.

He then reasoned step by step on how to bypass the first limitation:

  • Step 1: you use a proxy to hide your IP. When it is blacklisted, you can manually restart the proxy to scrape the website with a new IP.
  • Step 2: you use many proxies with many IPs to gain more time before you get detected.
  • Step 3: you use a proxy manager that maintains a pool of proxies. It automatically detects which ones have been blacklisted, excludes them from the pool and starts new ones, keeping the number of proxies constant.

This is exactly what Scrapoxy does.
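The step-3 logic can be sketched as a minimal pool manager. This is a hypothetical illustration of the idea, not Scrapoxy's actual code (the `ProxyPool` class and `startProxy` factory are my own names):

```javascript
// Minimal sketch of a rotating proxy pool, in the spirit of Scrapoxy.
class ProxyPool {
  constructor(size, startProxy) {
    this.startProxy = startProxy; // factory that spawns a fresh proxy
    this.proxies = Array.from({ length: size }, () => startProxy());
  }

  // Round-robin pick of the next available proxy.
  next() {
    const proxy = this.proxies.shift();
    this.proxies.push(proxy);
    return proxy;
  }

  // When a proxy gets blacklisted, drop it and spawn a replacement,
  // so the pool size stays constant.
  markBlacklisted(proxy) {
    this.proxies = this.proxies.filter((p) => p !== proxy);
    this.proxies.push(this.startProxy());
  }
}

// Usage: each startProxy() call would normally boot a cloud instance
// with a new IP; here we just fake incremental IDs.
let nextId = 0;
const pool = new ProxyPool(3, () => ({ id: nextId++ }));
const p = pool.next();
pool.markBlacklisted(p); // p is replaced, pool size is still 3
```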

Of course, the last question from the audience was about the morality of web scraping… Well, you can still visit their website or watch the video if you want to learn more!


Why do we fix bugs?

The speaker, Michal Švácha, was truly enthusiastic and inspiring during this entertaining presentation. With a lot of humor, he fitted perfectly in between the more technical presentations. After an epic bug-resolution story, he asked himself this simple question: “why do we fix bugs?”.

He ran through every possible reason:

  • Is it for our brain to feel better?
  • To please our product manager?
  • For the end-user at world’s end?
  • For the pleasure of ticking one more bug off our todo-list?

He finally came to the conclusion that we are human after all, and that we mainly do it to gain experience, to become better programmers, to satisfy our thirst for personal progress and accomplishment. In the end, fixing bugs is not about the destination, it is about the journey.

If you want the whole show, here is a link to the online video.

ReactJS in production

The last presentation, by Clément Dubois, was about ReactJS, used on the Chilean website of Club Med. The main reason behind this bold technology choice was the need for SEO on a single-page application. Indeed, single-page apps mostly appear blank to search engines, because they build the DOM dynamically depending on the results of AJAX sub-requests. With ReactJS, you can configure your server to first send a static, complete version of the page, understandable by search engines. The JavaScript then makes it dynamic, like a real web application. He went through the basics of ReactJS (the components, the state and the render function), including code snippets. He also explained that their process begins with identifying the components needed on a page, then deciding which generic versions can be written for later reuse.
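The core idea behind this server-side rendering approach can be sketched in plain JavaScript. Note that this is a conceptual illustration, not the real React API (React provides this through `react-dom/server`; the `PriceTag` and `renderPage` names here are hypothetical):

```javascript
// A pure render function: same state in, same markup out,
// whether it runs on the server or in the browser.
function PriceTag(state) {
  return `<span class="price">${state.amount} ${state.currency}</span>`;
}

// On the server: resolve the data first, then send fully-formed HTML
// that search-engine crawlers can index without executing JavaScript.
function renderPage(state) {
  return `<html><body>${PriceTag(state)}</body></html>`;
}

const html = renderPage({ amount: 990, currency: "USD" });
// The client-side JavaScript then takes over and re-renders the same
// component whenever the state changes, making the page dynamic.
```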

If you are interested, I invite you to learn more about ReactJS in its really useful documentation, or in the video of the presentation.


Well, if you’re interested in these topics, it would be a pleasure to meet you at one of the next Human Talk meetups! See you on October 13?


You liked this article? You'd probably be a good match for our ever-growing tech team at Theodo.

Join Us