FPGAs for the average engineer

October 4, 2016

You know Raspberry Pi and Arduino? Throw them away, these are toys for children. Systems on Chip are for adults.

The Ishtar plasma antenna

FPGAs to power a plasma source

I am kidding, I love the Raspberry Pi, but what SoCs offer opens a new dimension. Let's take the example of the Red Pitaya. Its two main features are a Zynq 7000, which combines an FPGA with a dual-core CPU, and two ADCs and two DACs running at 125 megasamples per second, which make it possible to receive and emit signals in the MHz range. All this for a price of around 300 euros. This means, first, that you have a fast digitizer available and you can play in the radio range. And second, this digitizer is connected to an FPGA, so you can do processing operations like FFT, filtering and so on at MHz speed! This is really a great tool, not only to learn with but also for practical applications. I use it, for instance, to generate a signal which is amplified by a 1 kW amplifier and injected into a plasma to investigate the propagation of waves in it. This is super easy to code in C and Python, you can use the GPIO to get trigger signals or activate other systems, and you can integrate it easily into a global control system. I use it as well to measure high-frequency instabilities in a tokamak plasma, with a real-time FFT to reduce the amount of data to store.
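
To give an idea of how simple the scripting side is, here is a minimal sketch of the kind of Python I use to drive the signal generator over the network. It assumes the SCPI server is running on the Red Pitaya; the address is only an example, and the command strings follow the documented SCPI set but should be checked against the official documentation for your firmware version.

    import socket

    REDPITAYA_IP = "192.168.1.100"   # example address of the board on the local network
    SCPI_PORT = 5000                 # default port of the Red Pitaya SCPI server

    def send(sock, command):
        """Send one SCPI command, terminated as the server expects."""
        sock.sendall((command + "\r\n").encode("ascii"))

    # Configure a 1 MHz sine on output 1 and switch it on.
    with socket.create_connection((REDPITAYA_IP, SCPI_PORT), timeout=5) as sock:
        send(sock, "GEN:RST")                 # reset the generator
        send(sock, "SOUR1:FUNC SINE")         # sine waveform
        send(sock, "SOUR1:FREQ:FIX 1000000")  # 1 MHz
        send(sock, "SOUR1:VOLT 0.5")          # 0.5 V amplitude
        send(sock, "OUTPUT1:STATE ON")        # enable the output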

As standard, it comes with a free version of Vivado (i.e. missing all the high-level features, but that is fine, you do not really need them). The most difficult part is installing it and putting up with the xxx GB of disk space it requires. The program itself is not buggy (at least, not at the level I use it) and you can really learn how to code hardware in Verilog or VHDL: it is rather exciting when you understand how it works and start to see gates and flip-flops through the code.

The big advantage of the Red Pitaya is that it is open source. Xilinx also provides a lot of documentation for Vivado. So, when a problem occurs (which happens every two minutes at the beginning), you have resources to find the solution rather easily. I would like to list here the most interesting links for learning about the hardware:

  • Red Pitaya Notes by P. Demin: this is the big, big, big reference. There is a bunch of interesting projects, including SDR and even nuclear magnetic resonance, together with a clean, version-controlled method to manage the FPGA code and the associated Linux ecosystem.
  • Vivado's YouTube tutorials by M. Sadri: everything about Vivado, from simulation to debugging. It takes time to get through them, but it is not time lost.
  • Building Linux on Zynq: this basically teaches you how to install an embedded Linux and the roles of the different components, from boot to shell.

Beyond that, you can start to do very interesting things: build bare-metal applications which do not require an OS, try Rust to gain in safety, and develop your own flexible, optimized PLC that suits your needs rather than the bank account of the big instrumentation companies.

Stairway to Heaven

September 30, 2016

This was this week's hype in the aerospace industry: Elon Musk presented his vision for reaching Mars and beyond. There are countless analyses and reviews of his presentation (here, for instance, for a technical one) weighing either in favor of or against Musk.


The ITS on Europa. Credit: SpaceX

I have been asked several times by friends and colleagues what I, as an ex space propulsion engineer, think about the feasibility of this vision. The bare answer is: I don't know. I mean, there is not enough information in this presentation to evaluate the feasibility of the Interplanetary Transport System (ITS). I wonder how pundits can form an opinion on that. There have been countless Mars mission design proposals in the past. This one is not really different. It is both credible and far-fetched because it is written on the same model: you assess the requirements (in terms of cost, mission duration, target), you take the existing technology (to be credible) and you extrapolate it to meet the requirements (and it may look far-fetched or not, depending on whether you are part of the proposal team or not). So basically, here, SpaceX develops the cost model to allow almost routine trips to Mars (very cheap, but it is a target – it makes sense to have something cheap if you want to "democratize" space); it takes the existing technology, a bit improved (the Raptor), the reusable launcher (complete reusability instead of first stage only), and it extrapolates the system (increased number of engines, huge composite tanks, …) to be able to have a cheap transport. This is what was presented. There is no new concept, no really new technology.

So, how can you assess the feasibility of the mission? You cannot, because data are missing on the critical part: the execution. And in the space industry, execution makes the difference between failure and success. What methods do they want to apply? How do they want to adapt their organization, their team, to meet the challenges? What new tools will they use to turn this concept into reality?

If you think about it, SpaceX has not invented new technologies or radically new mission concepts. They have taken existing ideas that other private companies have also taken (vertical landing – McDonnell Douglas and Blue Origin; the space capsule – Orbital). I assume that NASA played an important role in the transfer of technology towards private companies and that they did not need a huge research and development effort. But what Musk did, and this is a huge change, was to set up a modern organization managing both the system and the underlying technologies (propulsion, GNC, actuators), something the big players like Boeing or EADS did not bother to do because technology is considered low-level. Add to that modern IT tools to automate manufacturing and production, and it became possible for a relatively small team to develop and optimize, in a very efficient way, the construction of a new, partly reusable launcher and the associated space capsule. In the case of the Mars mission, there is no indication of what they will do in terms of organization, of how they will scale their methods to meet this challenge. For instance, they showed this big composite tank. Nice, but how did they build it? The difficulty is to create an industrial robot able to wind that in large series while respecting the required tolerances. No word about that. Yet this is where the feasibility of the project could be assessed. But this is also the heart of SpaceX. I understand that Musk does not want to show his hand.

So, what about this presentation? What is its purpose, if not to present the technical details of the project? In my opinion, there are two goals: one external, one internal.

Externally, you have to create the proper spirit for this kind of expensive endeavor. This is a classical strategy when you want to sell a project where you know in advance that people are not convinced or concerned: you show, far in advance, the most ambitious and incredible version of your project; the first time, people will say you are crazy; the second time, they will say no; the third time, "mmmm"; the fourth time, "why not…"; and so on until they completely change their mind, say "let's go" and sign the check. People need time to get used to a crazy idea. Very probably, you will not get what you asked for at the beginning, but a limited version which will correspond to what you actually wanted. This is a very effective long-term strategy for funding new experiments. I can completely imagine that this is what Musk wanted to do. People will start to think and rethink and rethink. When the negotiations for funding arrive, the ground will be ready and people will be used to the idea. Probably, creating a new civilization on Mars is not really his ultimate dream (on Mars, really? why not in Siberia? or in North Dakota – I am kidding, I love North Dakota). If he manages to get a first crew there under the flag of SpaceX, he will have left his mark in the sands of history. Anyway, his rhetoric must revolve around the idea of colonization and not of exploration, to avoid the major counter-argument to manned spaceflight: the robots! If he wants to send people to explore, his opponents will want to send probes, which are probably more efficient for this work. But if he wants to create an interplanetary species, there is nothing to oppose: you touch the heart of mankind as a group of settlers.

Internally, the goal is easier to understand: to create the right spirit at work. You are not working on a rocket that sends communication satellites for some investment fund or other. You are working on an interplanetary crewed spaceship. This makes a huge difference. You are part of the conquest of space. Under these conditions you can work 24 hours a day, 8 days a week.

To conclude, the presentation makes sense in terms of communication strategy, less so in terms of feasibility of the concept. If you are not an insider, you have to believe it or not. As an outsider, I follow my instincts and my centers of interest: I find chemical propulsion a bit boring 🙂 I admire these massive and loud engines as I admire old steam locomotives; they are jewels of engineering. But I am more attracted by electric systems and other, more exotic phenomena. I believe (I have no way to demonstrate it yet!) that there is a huge amount of energy to tap into and that the proper way to engineer it still has to be found. In addition, with cheaper and cheaper Earth-to-orbit transport, it becomes possible to test riskier technologies. This will be a fun time!


Look to windward

September 27, 2016

I have always been fascinated by the title of this novel by Iain M. Banks, even though I have never really understood its true meaning in the story. Anyway, I have this expression in mind now that I am trying to build a team for a project of mine.

And the wind you feel when you are selling your project to potential teammates is not a light breeze grazing your hair, it is a violent hurricane battering every part of your body. Looking to windward is painful.

The project is, in my opinion, not bad: there is a good idea, a potential market and possible long-term developments. And I kept its objectives reasonable and achievable with a modest amount of funding to start with. The technical challenge of developing the software is limited as well. Thus, it is a nice medium-sized project with a vision, a well-formed pitch, technical feasibility and the potential to reach a market.

Yet, what hell it is to find people ready to participate in it. I did not ask for 100% of their time; no, that is not necessary. It can be a side project at the beginning. But during the discussions I get all the risks and every imaginable failure; I am told about the hard competition, the difficulty of getting funded, the bugs, the security leaks and all the other trouble a code base can offer. I do not even want to imagine the reaction if I proposed to start a new SpaceX 🙂

It is incredible how pessimistic people can be. Is it because they care about you and want to spare you the suffering? Is it the animal instinct to escape danger? I do not know; maybe a mix of these. No wonder that successful entrepreneurs deploy a reality distortion field: it is the only way to deal with the surrounding negativity.

The positive aspect is that you learn to polish the presentation of your project and to improve your counter-arguments. The negative aspect is that I still have not found a soul to share this project with.

The art of science communication

September 20, 2016

If only science were a game between you and nature alone! Alas, it is not that simple: our environment is far too complicated to be understood by an individual. Even if the myth of the lonely genius Einstein persists, the reality is that science, whatever its domain of application, is an endeavor at the scale of humanity. A problem can be addressed only through cooperation, discussions, disputes. Consequently, the talent of a scientist resides as much in communication skills as in theoretical and experimental proficiency.

I came to dig a bit deeper into this topic while reading this article highlighting the need for a simplification of scientific communication. I agree that there is a communication problem in science, but it may not be due only to the elitist style. If we want to understand the issue better, we have to consider the different types and levels of communication that the scientist has to deal with. The frontier between the different types is rather blurry and depends on the targeted audience and the purpose of the communication, but we can distinguish the following levels.

The first level of communication is the routine communication with teammates, people working on the same topic who aim at solving the same scientific problems. It is a highly specialized discussion where the use of jargon is recommended to keep a high level of accuracy and avoid misunderstandings. The communication is in this case a mixture of equation writing, drawing, exchange of code and rational discussion. This is a difficult exercise because it is absolutely necessary to make sure that the participants in the discussion share, at the end, the same understanding of the problem and of the possible solutions. From experience, a lot of time is lost because of misunderstandings. It is also difficult because scientists often think that discussion with colleagues is a waste of time at the expense of pure individual thinking.

The second level of communication is the publication: it can be a report, an article, a digital notebook. The purpose here is to communicate in detail the method, the results, the analysis and the conclusions of the work so that your peers can try to reproduce, falsify, confirm or improve it. Therefore, it has to be clear, accurate and complete. This level is typically what is expected from a scientist. There is a lot of ongoing discussion about the problems of reproducibility, peer review and journal impact factors, but that is a slightly different story.

The third level of communication is the oral presentation. The purpose here is to attract the attention of the scientific community to your work, whether to get collaboration, help, contradiction or funding. An oral presentation is, by definition, limited in time and thus can focus only on a limited number of points. Therefore it cannot address technicalities. The communication has to highlight some key ideas; it has to activate some triggers in the audience to motivate them to look at your work in more detail (through communication of the second and first levels). Honestly, given what I see at conferences, this is an exercise which is, most of the time, poorly done. Slides overloaded with plots and text, no coherent structure, no context explained, no vision. I suspect that most scientists fear that they cannot use storytelling and simple slides without being criticized for lack of rigor. There is a balance to find. A presentation, even a scientific one, has to be compelling.

The last level of communication is the communication with the public. Void. Blank. This is the ultimate difficult exercise. Hell on earth. And it has become worse in recent years. Before, the main contact with the public was through the media and the journalists, and only a few chosen, distinguished scientists were allowed to talk to the journalists. So the difficult exercise of explaining science to a broad audience fell to the journalist. Difficult, because you have to find the compromise between the accuracy of the facts and the interest of the public. We touch here the heart of the problem: the scientific method (but not the results!) is fundamentally not attractive. By definition, it is rational and not emotional. Most people expect emotion. There can only be a conflict when we want to communicate about science. Anyway, with the development of the Internet and of social networks, the separation between the public and the scientists has faded away. We are now in a position to talk face to face with the audience. And the audience expects communication with the scientist; it expects him to play a social role, even a political one when it comes to topics like climate change or biotechnology. This is a role for which the scientist is almost completely unprepared. The difficulty is even greater now that society faces a problem with facts. The exact reason for this phenomenon is unclear: the explosion of data, the increased complexity and hyper-specialization of science, degraded education. Whatever it is, people tend to pay less and less attention to facts, data and rational discourse (if you want some proof, listen to some well-known politicians; a more in-depth discussion can be found on Rhys Taylor's blog). So the scientist is expected to speak out, but the type of communication for which he is trained will not be heard. It can only end badly: either he shows viewgraphs on TV or he moans "trust me!" (which is the worst thing to say in science). Honestly, I still have no answer to offer as to the behavior to adopt in this case. This is still experimental ground. But the scientist must enter this ground, communicate with the audience and find strategies to make his voice loud and clear, so that the public gets interested in science again.

The philosophical physicist

August 12, 2016

I could have called this post "The war between science and philosophy" or "The zero-sum game", but I found that too childish for a subject which is important for the future of physics. There was a recent update in the "discussion" of the role of philosophy in science. Massimo Pigliucci and Sabine Hossenfelder, to take the most recent insightful articles, took positions on the claim that "philosophy is not useful to do physics". As a baseline physicist (i.e. not one working on the fundamental questions of the universe), I have to react and say why I need philosophy. First, please excuse in advance my lack of clarity and accuracy: I do not have the experience and talent of most participants in this debate. Yet I hope to convey enough of my message to make it useful.

I would first like to cut short one objection: that I am not a theoretical physicist working on "advanced subjects" like string theory or loop quantum gravity, and thus am not entitled to discuss this kind of fundamental issue. Indeed, I am a plasma physicist; I try to understand the phenomena occurring in a plasma, how it is produced, how it reacts to some stimuli. The most "advanced" tool that I use is quantum field theory, for some calculations involved in the measurement of the electric field in a magnetized plasma through the Stark effect. Beyond that, I follow what happens in theoretical physics (I do not like this term, because it implies a fundamental separation between experiment and theory) and I enjoy what I am able to grasp of the beauty of the constructions (as I enjoy glimpses of category theory or of harmonic forms), but I have no practical experience there. Yet, I think that the reflection occurring at the level of theoretical physics affects the whole of physics, whatever the domain; otherwise it would be a strong, if not deadly, blow to its coherence.

To address now the core of my ideas: as a physicist, philosophy is useful to me at two levels. First, at a practical level, because I am a human and not a purely rational machine, and it is sometimes difficult to bridge the gap between the human part and the physicist part. Second, at a theoretical level, because the goal of a physicist, and more generally of a scientist, is to understand the world as a whole and, unfortunately, science fails at some point. Let's examine these two points in more detail.

The job of a physicist is to apply the scientific method, which is characterized in daily life by two things: rationality and falsifiability. You take some assumptions, you derive a model from them and experimental predictions from the model, you run some tests and check whether they validate the model or not. If not, you check that your chain of reasoning is rational and, if it is, you change the assumptions. So, from the assumptions to the test/theory comparison, it is basically algorithms (sorting, pattern matching, tree traversal) in action, except that for the moment only human brains can deal with the fuzziness of reality and with the absence of clear-cut borders to the area of investigation: you can always find new ramifications to other topics and you have to expand your analysis. But computers are progressing fast and are taking over a big part of this work.

But what about the assumptions, where do they come from? They are derived from other assumptions. Good, you see the problem. So, there is always a moment (or even several) in the day of a physicist, when all scientific methods are exhausted, where he scratches his head with a sigh. What is the practical solution? He takes a step back: he tries to establish analogies with other problems, he conceives random or impossible assumptions, he drinks a coffee or goes to the theatre until inspiration comes back. But the most effective solution is to go to a colleague's office and discuss. And when the problem is serious (i.e. all scientific avenues are exhausted), the discussion is of a philosophical nature (even if not with the quality of experienced philosophers): he tries, with his colleague, to elaborate concepts with words. Who said that words were not accurate enough to do science? They are not as accurate as equations, but their fuzzy nature is a great help when your mind is trapped by the rigidity of the equations. They give you room to expand your mind and to discuss with your colleagues. How many scientists discuss only with equations? It is not for nothing that we are asked to reduce the number of equations in a presentation: they are a bad tool for discussion, and presentations are an invitation to discussion. The philosophical discussion reduces the accuracy of the ideas but gives more flexibility and opens new areas. In this sense it is complementary to the scientific method. Through discussion (with yourself or with your colleagues) you explore new ideas and you establish new assumptions. When you come to an agreement, you apply the scientific method to them and the machine is running again.

This is also where you understand that experimental results are very useful, not only to validate or invalidate a theory, but to discuss: they are as fuzzy as words, or even fuzzier. The relation between two experimental data sets will never be perfectly linear; you will have some scatter, which invites discussion: is it really linear? Should we add a bit of non-linearity to the interpretation? New ideas often arise from the discussion of experimental results.

This is why scientists should be better trained in the philosophical method: it would improve their discussions and give them the tools to elaborate concepts more easily before transforming them into scientific models. It would also probably improve the quality of human relations and remind them that they are not purely rational machines (and maybe prevent some nervous breakdowns).

The second level of interest for philosophy is more fundamental. There is a point where the scientific method does not work when you try to understand the world in which you live. Actually, it breaks down for most daily issues (unless you live in a lab or your name is Sheldon): your relations with society, politics or your love life. You can write a numerical model of your relationship and test it, but if the test fails, it will not be possible to change the model! Facing this situation, either you just live your life or, if you really want to understand, philosophy is the only possible rational way to approach the problem. This is all you can do when you meet the absurd, as defined by Albert Camus in The Myth of Sisyphus: the absurd arises when the human need to understand meets the unreasonableness of the world, when "my appetite for the absolute and for unity" meets "the impossibility of reducing this world to a rational and reasonable principle". The worst moment for a scientist.

Of course, you can say that, in the end, physics will explain everything (we could discuss that; personally I am not convinced, not with the present tools) and that we are just limited for the moment by our ignorance. Sure, but now is when we live, and if we want to avoid too much frustration, we have to use all possible rational tools to quench our thirst for knowledge or, at the very least, to deal with the world.


About Drupal

July 19, 2016

Our plasma source project involves several teams across Europe. We wanted a centralized, remotely accessible source of information. Our idea was to have an intranet where we could store the documentation, the to-do lists, and a gallery of pictures and videos. And we needed a solution which was fast and easy to deploy. After some quick trade-offs, we chose Drupal, which is based on a classical HTML/PHP/MySQL stack.


The big advantage is that, indeed, you get a polished solution very quickly, I mean in a few weeks. Everything is controlled through the integrated administrator's GUI and the online documentation is abundant. Its use is smooth and I have had very little downtime.

So if you just want an intranet with standard features, Drupal is really the right solution. Yet, in parallel, we have developed our data processing system Gilgamesh, which is based on Jupyter and thus on Tornado in Python. As a result we found ourselves with two systems with different architectures. Of course, they have different purposes, but for some applications it would be interesting to have bridges between the two systems. For instance, in Gilgamesh you can make references to papers in LaTeX style; it would be useful to reference documents stored in the Drupal system. In theory, this should be possible, since the document reference is saved in a MySQL database and the document itself in the filesystem. But the architectures are so different that, in practice, the interface is a nightmare to develop.
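
To make the idea of such a bridge concrete, here is a minimal sketch of the direction I had in mind: querying Drupal's MySQL tables directly from the Python side to resolve a document reference. The node and file_managed tables correspond to a standard Drupal 7 schema, but the join through the field table is purely illustrative; the real mapping depends on how the content types are configured, which is exactly where the nightmare begins.

    import pymysql  # any MySQL client library would do

    def resolve_drupal_document(title):
        """Return the file path attached to a Drupal 'document' node, or None.

        Assumes a standard Drupal 7 schema; the field_data_field_document
        table and its columns are illustrative, not our actual configuration.
        """
        connection = pymysql.connect(host="localhost", user="drupal",
                                     password="secret", database="drupal")
        try:
            with connection.cursor() as cursor:
                cursor.execute(
                    "SELECT f.uri "
                    "FROM node n "
                    "JOIN field_data_field_document d ON d.entity_id = n.nid "
                    "JOIN file_managed f ON f.fid = d.field_document_fid "
                    "WHERE n.title = %s AND n.type = 'document'",
                    (title,))
                row = cursor.fetchone()
                return row[0] if row else None
        finally:
            connection.close()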

Therefore, in the future, and for the next project, I will avoid Drupal and start any intranet on a Tornado-based solution. That way it will be easier to integrate it into more complex systems like Jupyter.


JupyterLab: first review

July 15, 2016

A pre-alpha version of JupyterLab has officially been released: you can see the details of the reasons and advantages on the Jupyter Blog and on the Bloomberg Blog. You will also find there the slides and video of the SciPy 2016 talk.

I wanted to give a first review of this new version of Jupyter. I have installed it for our Gilgamesh Data Processing System and tested it a little.

There are two parts: the user view and the developer view.

JupyterLab for the user

The first feeling at the start is that you get a clean desktop application in your browser: you have several movable panes, and you have icons to start the application you need: a notebook, a console or the About panel. And you have the file manager, which is FAR better than the Jupyter dashboard: you can move files between folders, you can drag and drop. It is very practical. You have easily accessible help pages, and you can arrange your notebooks or consoles in side-by-side panes.

Graphically, it is not yet finished: I find the color scheme a bit dull. But judging by the activity on GitHub, the designers are working hard on improving that.

There is one usability issue in my opinion: the command menu. Why is it on the side, next to the file manager, outside the notebook? It is not intuitive at all.

As for the notebook itself, I am not quite sure, but I have the feeling that the display is a bit slower than in the classical notebook. This remains to be confirmed in daily use, and in any case it does not disturb the manipulation of the cells.

Thus, we have here a useful product with clear improvements over Jupyter. There are glitches, but we have to keep in mind that this is only a pre-alpha release; it already shows a high level of quality for such an early release. In addition, we have to understand the philosophy of JupyterLab: it is not an end product, it is an infrastructure to connect your plugins and develop your own product tailored to your needs. This is why it is important to see what is under the hood.

JupyterLab for the developer

First, a note of caution: I am not a high-level front-end developer, so this review is based mainly on a comparison with the front-end of the standard version of Jupyter.

The main idea to note: JupyterLab is a front-end; there is not a single part of the code that changes the Python server side (based on Tornado). So basically you can run Jupyter and JupyterLab on the same server instance (you just browse to the right webpage to get the interface you want).

It is based on TypeScript and on PhosphorJS, which provides widgets (menus, frames, …), messaging between objects and self-aware objects à la traitlets (when their properties change, they fire signals). The result is a very clean, modular and logical structure. You build your application by assembling plugins and widgets. The communication between them is almost automatic (almost!). The communication with the Jupyter server goes through the jupyter-js-services API (which is still a bit confusing in my opinion, but this is more related to my limited abilities in JS programming).
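
For readers who have never used traitlets, here is a minimal Python sketch of the pattern I am referring to: an object whose attributes fire a notification when they change. PhosphorJS signals play a similar role on the TypeScript side; this is only an analogy, not JupyterLab code.

    from traitlets import HasTraits, Int, observe

    class Counter(HasTraits):
        """A self-aware object: changing `value` fires a notification."""
        value = Int(0)

        @observe("value")
        def _on_value_change(self, change):
            # `change` carries the old and new values of the trait
            print("value changed from {} to {}".format(change["old"], change["new"]))

    counter = Counter()
    counter.value = 3   # prints: value changed from 0 to 3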

What I have not tested yet is the use and development of ipywidgets and how the Backbone architecture is integrated into the JupyterLab architecture. But I think it can only go in a better direction.

To conclude, JupyterLab offers a set of front-end tools to easily modify or extend the Jupyter notebook: if you don't want a console you can remove it, or you can add your own; you can add notebooks with special layouts (for presentations or dashboards), or you can imagine more exotic plugins. For instance, for Gilgamesh, I am developing a plugin for a kind of "JupyterTalk": the notebook is no longer saved as a file but in a database. Several users can connect to it, each having their own kernels and typing their own cells (each cell is identified by the username). But the display is common to all users: you see your cells and the cells from the other users. So you get a chat with a succession of messages, which are more than text: they are real Jupyter cells (in Markdown or code) with their output. So you can have a discussion, like in a chat, but with the power of a kernel behind it to display data and run algorithms (a rough sketch of the storage side is given below). This is something made possible by the flexibility of Jupyter. You can have Augmented Discussions.
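
Since the plugin is still in development, here is only a rough, self-contained sketch of the storage side of this "JupyterTalk" idea, with SQLite standing in for whatever database ends up being used; it illustrates the data model, not the actual plugin code.

    import datetime
    import sqlite3  # SQLite keeps the sketch self-contained; any database would do

    def init_db(path="jupytertalk.db"):
        """Create the shared discussion table if it does not exist yet."""
        db = sqlite3.connect(path)
        db.execute("""CREATE TABLE IF NOT EXISTS cells (
                          id INTEGER PRIMARY KEY AUTOINCREMENT,
                          username TEXT NOT NULL,
                          cell_type TEXT NOT NULL,   -- 'markdown' or 'code'
                          source TEXT NOT NULL,      -- the cell content
                          output TEXT,               -- rendered output, if any
                          created TEXT NOT NULL)""")
        return db

    def post_cell(db, username, cell_type, source, output=None):
        """Append a cell to the shared discussion, tagged with its author."""
        db.execute("INSERT INTO cells (username, cell_type, source, output, created) "
                   "VALUES (?, ?, ?, ?, ?)",
                   (username, cell_type, source, output,
                    datetime.datetime.utcnow().isoformat()))
        db.commit()

    def discussion(db):
        """Return all cells in chronological order, like a chat transcript."""
        return db.execute("SELECT username, cell_type, source, output "
                          "FROM cells ORDER BY created, id").fetchall()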


JupyterLab is the next step on the way to developing an ecosystem instead of a simple application. This looks like a bright strategic development and I am eager to see what will come out of the imagination of the community. I think it opens many possibilities far beyond the notebook. JupyterLab is a new layer above the operating system: it is the computing system in charge of connecting the user with his kernels to support and enhance his work. Kernels can be languages, but also interfaces with hardware (a Python kernel on a Raspberry Pi can give access to the GPIO ports and the associated peripherals). Therefore it will offer your narrative computing access to data, algorithms and hardware. Very promising. Good job, Jupyter developers!
