FPGAs for the average engineer

October 4, 2016

You know the Raspberry Pi and the Arduino? Throw them away, these are toys for children. Systems on Chip are for adults.

The Ishtar plasma antenna

FPGAs to power a plasma source

I am kidding, I love the Raspberry Pi, but what SoCs offer opens a new dimension. Let's take the example of the Red Pitaya. Its two main features are a Zynq-7000, which combines an FPGA with a dual-core CPU, and two ADCs plus two DACs running at 125 megasamples per second, which make it possible to receive and emit signals in the MHz range. All for a price of around 300 euros. This means, first, that you have a fast digitizer available and you can play in the radio range. And second, this digitizer is connected to an FPGA, so you can do processing operations like FFT, filtering and so on at MHz speed! This is really a great tool, not only to learn but also for practical applications.

I use it, for instance, to generate a signal which is amplified by a 1 kW amplifier and injected into a plasma to investigate the propagation of waves in it. This is super easy to code in C and Python, you can use the GPIOs to get trigger signals or to activate other systems, and you can integrate it easily into a global control system. I use it as well to measure high-frequency instabilities in a tokamak plasma, with a real-time FFT to reduce the amount of data to store.
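To give an idea of how little code the control side needs, here is a minimal Python sketch that drives the signal generator through the Red Pitaya's SCPI server over TCP. The IP address is obviously yours to fill in, and the command strings and port number are my assumptions based on the standard SCPI interface, so check them against the Red Pitaya documentation before trusting them.

    import socket

    RP_HOST = "192.168.1.100"  # hypothetical address of the Red Pitaya on your LAN
    RP_PORT = 5000             # port usually used by the Red Pitaya SCPI server (assumed)

    def send(sock, cmd):
        """Send one SCPI command, terminated as the server expects."""
        sock.sendall((cmd + "\r\n").encode("ascii"))

    with socket.create_connection((RP_HOST, RP_PORT), timeout=5) as rp:
        send(rp, "GEN:RST")                 # reset the signal generator
        send(rp, "SOUR1:FUNC SINE")         # sine waveform on output 1
        send(rp, "SOUR1:FREQ:FIX 1000000")  # 1 MHz, i.e. the range discussed above
        send(rp, "SOUR1:VOLT 0.5")          # 0.5 V amplitude, before any external amplifier
        send(rp, "OUTPUT1:STATE ON")        # start emitting

From there, the same kind of script can arm the acquisition, pull the samples back over the network and trigger whatever comes next in the control system.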

As standard, it comes with a free version of Vivado (i.e. missing all the high-level features, but fine, you do not really need them). The most difficult part is to install it and to put up with its xxx GB of required disk space. The program itself is not buggy (at least not at the level at which I use it) and you can really learn how to code hardware in Verilog or VHDL: it is rather exciting when you understand how it works and start to see the gates and flip-flops through the code.

The big advantage of the Red Pitaya is that it is open source, and Xilinx also provides a lot of documentation for Vivado. So, when a problem occurs (which happens every two minutes at the beginning), you have resources to find the solution rather easily. I would like to give here the most interesting links for learning about the hardware:

  • Red Pitaya Notes by P. Demin: this is the big, big, big reference. There is a bunch of interesting projects, with SDR and even nuclear magnetic resonance, plus a clean, version-controlled method to manage the FPGA code and the associated Linux ecosystem.
  • Vivado YouTube tutorials by M. Sadri: everything about Vivado, from simulation to debugging. It takes time to get through them, but it is not time wasted.
  • Building Linux on Zynq: this basically teaches you how to install an embedded Linux and the roles of the different components, from boot to shell.

Beyond that, you can start to do very interesting stuff: build bare-metal applications which do not require an OS, try Rust to gain in safety, and develop your own flexible, optimized PLC that suits your needs and not the bank account of big instrumentation companies.


Stairway to Heaven

September 30, 2016

This was this week's hype in the aerospace industry: Elon Musk presented his vision for reaching Mars and beyond. There are countless analyses and reviews of his presentation (see here, for instance, for a technical one), weighing either for or against Musk.


The ITS on Europa. Credit: SpaceX

Friends and colleagues have asked me several times what I, as an ex space-propulsion engineer, think about the feasibility of this vision. The bare answer is: I don't know. I mean, there is not enough information in this presentation to evaluate the feasibility of the Interplanetary Transport System (ITS). I wonder how pundits can form an opinion on that. There have been countless Mars mission design proposals in the past. This one is not really different. It is both credible and far-fetched because it is written on the same model: you assess the requirements (in terms of cost, mission duration, target), you take the existing technology (to be credible) and you extrapolate it to meet the requirements (and it may look far-fetched or not, depending on whether you are part of the proposal team).

So basically, here, SpaceX develops the cost model to have almost routine trips to Mars (very cheap, but it is a target; it makes sense to have something cheap if you want to "democratize" space); it takes the existing technology, a bit improved (the Raptor), and the reusable launcher (complete reusability instead of only the first stage); and it extrapolates the system (an increased number of engines, huge composite tanks, …) to be able to offer cheap transport. This is what was presented. There is no new concept, no really new technology.

So, how can you assess the feasibility of the mission? You cannot, because data are missing on the critical part: the execution. And in the space industry, execution is what separates failure from success. What methods do they want to apply? How do they want to adapt their organization, their team, to meet the challenges? What new tools will they use to turn this concept into reality?

If you think of it, SpaceX has not invented new technologies or radically new mission concepts. They have taken existing ideas that other private companies have also taken (vertical landing: McDonnell Douglas and Blue Origin; space capsules: Orbital). I assume that NASA played an important role in the transfer of technology towards private companies and that they did not need a huge research and development effort. But what Musk did, and this is a huge change, was to set up a modern organization managing both the system and the underlying technologies (propulsion, GNC, actuators), something that the big players like Boeing or EADS did not bother to do because technology is low-level. Add to that modern IT tools to automate manufacturing and production, and it became possible for a relatively small team to develop and optimize, in a very efficient way, the construction of a new, partly reusable launcher and the associated space capsule.

In the case of the Mars mission, there is no indication of what they will do in terms of organization, of how they will scale their methods to meet this challenge. For instance, they showed this big composite tank. Nice, but how did they build it? The difficulty is to create an industrial robot able to wind such tanks in series while respecting the required tolerances. No word about that. Yet this is where the feasibility of the project could be assessed. But it is also the heart of SpaceX, and I understand that Musk does not want to reveal his trump cards.

So, what about this presentation? What is its purpose, if not to present the technical details of the project? In my opinion, there are two goals: one external, one internal.

Externally, you have to create the proper spirit for this kind of expensive endeavor. This is a classical strategy when you want to sell a project where you know in advance that people are neither convinced nor concerned: you show, far in advance, the most ambitious and incredible version of your project. The first time, people will say he is crazy; the second time, they will say no; the third time, "mmmm"; the fourth time, "why not…", and so on, until they completely change their mind, say "let's go" and sign the check. People need time to get used to a crazy idea. Very probably, you will not get what you asked for at the beginning, but a limited version which will correspond to what you actually wanted. This is a very effective long-term strategy to fund new experiments. I can completely imagine that this is what Musk wanted to do. People will start to think and rethink and rethink. When the negotiations for funding arrive, the ground will be ready and people will be used to the idea. Probably, creating a new civilization on Mars is not really his ultimate dream (on Mars, really? Why not in Siberia? Or in North Dakota? I am kidding, I love North Dakota). If he manages to get a first crew there under the flag of SpaceX, he will have left his mark in the sands of history.

Anyway, his rhetoric must revolve around the idea of colonization, not of exploration, to avoid the major counter-argument against manned spaceflight: the robots! If he wants to send people to explore, his opponents will want to send probes, which are probably more efficient for this work. But if he wants to create an interplanetary species, there is nothing to oppose: he touches the heart of mankind as a group of settlers.

Internally, the goal is easier to understand: to create the right spirit at work. You are not working on a rocket that sends communication satellites for some investment fund; you are working on an interplanetary crewed spaceship. This makes a huge difference. You are part of the conquest of space. In these conditions, you can work 24 hours a day, 8 days a week.

To conclude, the presentation makes sense in terms of communication strategy, less in terms of feasibility of the concept. If you are not an insider, you can only believe it or not. As an outsider, I follow my instincts and my centers of interest: I find chemical propulsion a bit boring 🙂 I admire those massive, loud engines the way I admire old steam locomotives; they are jewels of engineering. But I am more attracted by electric systems and other, more exotic phenomena. I believe (! I have no way to demonstrate it yet) that there is a huge amount of energy to tap into and that the proper way to engineer it still has to be found. In addition, with cheaper and cheaper earth-to-orbit transport, it becomes possible to test riskier technologies. This will be a fun time!

 


Hackerlab

June 25, 2014

There is more and more talk of open or citizen science. For the moment, the main focus is on the publishing system and the way to take it out of the hands of somewhat too greedy professional publishers. Two other aspects are experimentation and numerical science, two first-class money eaters. There is a lot to say about publishing and numerical science, but I want to focus today on the experimental part and how the maker movement is about to "make" things change in science, provided that we address the right type of issue.

We don't need to be fortune-tellers to foresee that giant experiments like LHC, ITER or NIF will absorb more and more of the public funding for science. They require money, manpower and a lot of paperwork, changing the way scientists deal with experiments. I have to be clear: these experiments are useful and enable the development of a lot of spin-off technologies. The problem is that small or medium-sized experiments are cancelled because of the resulting lack of funding. And believe me, there is a lot to learn from room-sized or table-sized testbeds. Actually, it is even the only way to keep in contact with reality.

If most institutes or labs start to give up working on this type of old-fashioned experiment, it could be an opportunity for citizen science. The idea would be to have hackerlabs dedicated to one or several experiments, with access for everybody, just like a hackerspace. You go there to learn how to build a testbed, to carry out experiments, to imagine new ones. All this with the support of a team of professional experimenters and access to a full-fledged workshop.

What do you gain with respect to a classical lab? First, independence and flexibility: you choose your hackerlab, your experiment, your objectives, your agenda. Second, you keep your hands on real stuff: you learn why experimenting is hard, why it is not enough to push a button to get ready-to-use, Nobel-prize-grade results. Third, you can use the structure of the maker world as a template, including its communication system, to present your experiments and your results. You can even imagine remote control of your testbed, creating your plasma discharge from your bed (I used to trigger my digitizers from the seashore, the best place to think).

And you would not have to justify in advance the choice of every technology you use ("because it's fun" has always been a bad justification in the academic world). Finally, it would be a good place to use Google Glass integrated into your experimental process!


ITER facts

February 22, 2012

ITER will be the biggest tokamak ever built to test nuclear fusion with magnetic confinement.
It is also one of the biggest experiments ever built.
M. Merola, the head of the ITER Divertor group, has found good comparisons to convey the order of magnitude of this machine's size.
You can find details of his presentation here:

http://www.fusioniteropportunities.org.uk/presentations/2010/Merola_2010lo.pdf

 


Efficient Mega-Engineering (Part 3): preliminary studies

September 13, 2011

In my opinion, preliminary studies have practically one single purpose: to optimize the cost/benefit ratio of the project. By "optimize", I mean to make this ratio acceptable to the potential project funders; in the case of mega-projects, these funders are governments. This is a long-term cycle (cycles are anyway the main ingredient of engineering and design) in which you will have to find a preliminary design with benefits high enough to justify the cost.

The problem for a mega-project is that, at the beginning, you don't have the slightest idea of its cost, even if you already know the main features of the design. Indeed, this kind of project involves new technologies, which are sometimes still in development, and has a level of complexity with several layers of subsystems. Without in-depth analysis (which is not possible at the pre-design stage) you cannot estimate the price. What you have are references: other projects that yours can be compared with: Apollo, LHD, and so on. You have a feeling (but just a feeling, it is a question of flair) of the size and complexity of your project compared with the others, you have an idea of the benefits expected by your funders, and you imagine an acceptable cost. Yes, you imagine it, you release it and you wait for the feedback. This is the first, long step of the cycle. In this step, it is necessary to build very good connections with funders and policy-makers, because you have to find out what the acceptable cost is and to make the benefits of your project interesting for the people with money and power. There is almost no engineering here (of course, being a talented engineer gives credibility, but being a talented politician is even more important).

During this cycle, you will get feedback on your first figure: it will be negative: too high, of course, for the expected benefits. What you will have to find out are the cost and the benefits which are acceptable. A hard task, because it is influenced by the present and future economic and political situation and by the public perception of the project. Preliminary studies have a twofold use here: as a tool for lobbying (you will explore new designs which are more appreciated by the public, you will include technologies that are strongly supported by the industries of the governments you target) and as a tool to decrease the evaluated costs. This second use will mainly take the form of trade-off studies to show the pros and cons of different designs and their impact on the final budget.

This period of lobbying and negotiation, generally spanning a decade (or more), will end, if successful, with the opening of official requests for proposals by one or several governmental agencies: this is the sign that they acknowledge the potential interest of the project. And what is interesting in this phase is that some money (but not much) starts to flow into the project. The principle is the following: the agency says, "Well, I have heard about a project with an interesting design, can you make some basic calculations to see if it is realistic and how much it costs? Here are some bucks," and you answer, "Oh, OK, it does not look too bad (of course, it is your idea) and in addition we have some background in the topic (of course, since you worked out the idea)." And you start to formally develop your preliminary design: you set up a clear work breakdown structure, you evaluate the cost of each sub-project, you develop several different designs so that the agency has the feeling that it will be the one to choose the best design, you prepare a roadmap and you anticipate the spin-offs expected during the development of the project so that the agency has short-term milestones. To make things a bit more complicated, you are generally in competition with other groups (which try to prove that the project has no value and that it would be better to work on their own projects), and the allocated budget is not enough to cover the costs of this phase.

This second cycle is interesting, because you now have a better idea of the cost expected by the funders, but you also start to have a better idea of the REAL cost of the project. Indeed, in these preliminary studies, your team will start to dig into the different components to assess the cost of their development, and you will be able to start to use economic models to obtain a realistic figure. And at this point you will realize that the real cost is far beyond the expected cost.

And what you do now will have an impact on the whole lifetime of the project; it will explain all its delays and even its failure. What you do, after you have considered the gap between real and expected cost, is ask your team to trim the design. By doing that, you can be sure that your project will have delays and flaws (thus extra costs), because the design you propose in the end does not correspond to the one necessary to achieve the objectives. What is hidden behind "design trimming" is something that most engineers do not understand: they have worked out a design and its cost, and now the project manager comes and says: make a new, less expensive design. The art of trimming a design would need an entire post to be described (I will do that later); I (and a lot of my colleagues) have always thought it was an offence to the engineering profession and a sign of bad project management. I realize now (but I am still not completely sure) that it is a normal part of a project's life and that it is not avoidable. If you want your project to start, you have to accept a degraded design with an acceptable cost. The difference between a good and a bad project manager is the way they trim the design.

So, at the end of this second cycle, you have a design (good or not) and an acceptable price to pay. Now the project can start.


Efficient Mega-Engineering (part 2): birth of a project

August 8, 2011

A project can be seen as a compound of two ingredients: physics and engineering. It is a distinction that I dislike, but which is all the same useful to understand how a big project starts. In nuclear fusion, the physics tells you which plasma configurations are best to keep the particles confined and reach ignition, and the engineering tells you what kind of magnetic coils and infrastructure are necessary to achieve this configuration. There is actually a balance to find between physics and engineering: the less you understand the physics, the more you have to rely on heavy engineering to compensate for this lack of knowledge.

Spheromak

We can take the example of toroidal magnetic confinement configurations for fusion research. One possible solution is the spheromak, where the plasma self-generates its own magnetic field, a kind of dynamo effect. It requires almost no external structure to stay confined; the problem is that it is in a permanent turbulent state which is hard to understand and to control; as a result, its confinement time is quite low (and the reduced amount of time and money accorded to this kind of project prevented any significant progress on this type of facility). The solution chosen was to reduce the freedom of the plasma by containing it inside an externally imposed magnetic field. A lot more engineering is involved and, to limit its complexity, an axisymmetric configuration was favoured; it was the birth of the tokamak. The problem is that this configuration is stable only if you induce a toroidal current inside the plasma, which has a deep impact on its physics (creation of instabilities). Therefore, another idea was to go a step further in engineering complexity with the stellarator and to give up axisymmetry by twisting the magnetic field, so that no plasma current is necessary for confinement. This short overview of the different types of fusion facilities shows the difficulty of finding the right balance between engineering and physics.

Tokamak

Aerospace is also a significant example: what prevents us from reaching Mars or even the other stars? The fact that it is impossible to find a balance between physics and engineering. Either you want to use well-known physics based on chemical or electrical propulsion, and in this case the cost of the engineering necessary to overcome the obvious shortcomings of these methods is too huge to be realistic; or you want to use advanced physics (antimatter, warp drives or whatever exotic engines) and you are confronted with the lack of knowledge.

Consequently, a project can start when the physics is sufficiently understood to be implemented in an engineering infrastructure with a limited level of complexity, i.e. which is tractable in terms of cost and of management (of interfaces).

Stellarator

Different scenarios can happen and trigger the start of a project: an unexpected discovery (for instance the H-mode confinement in tokamaks in 1982), an improvement of the technology (advances in superconductors), an improvement of engineering tools (CAD, collaborative frameworks) and so on. In most cases, we have iterations over long periods where both physics and engineering indicate the direction to follow in their respective fields of research.

One major difficulty in mega-projects is that the physics is multifaceted, involving many areas of interest with different conceptual tools; the people in charge of preliminary designs need a broad general culture in both physics and engineering, and adequate tools to survey experiments and theoretical works with a possible impact on their projects.

The pre-design of a project is the first milestone in the connection of physics and engineering. We will see in a later post that this is the point where most of the difficulties met by a project in the later steps are rooted.


Efficient Mega-Engineering (part 1)

July 22, 2011

Well, you probably know it, the Space Shuttle era is now over. Like many other space enthusiasts, I wonder what the future will be about: commercial space access and tourism are certainly part of this future, in particular with the prodigious development of SpaceX and its Falcon launcher. But this is not what interests me in space. I like the exploration part, the discovery of new horizons, the possibility to travel ever further away from the mother planet. Consequently, I like projects like Icarus, which works on the design of an interstellar probe. With the present knowledge, it sounds unrealistic, at the limit of science fiction, but that is where the dream is, the excitement, the motivation.

This kind of project is what I call Mega Engineering: a project at the limit of, or even beyond, technological or physics knowledge, with highly multidisciplinary interactions, all packed in a complex system where several countries have to participate, with intricate political issues. What is the difference with Big Engineering, like the development of launchers, the space shuttle, space telescopes or particle accelerators? These examples are mainly based on proven technologies and physics; the complexity comes from putting all these technologies together, and the difficulty there is the system. Mega Engineering complexity comes from the system, the technology and the physics all at once. I think that nuclear fusion reactors can be put in this category, interstellar probes as well, and even economical earth-to-orbit transportation. All these projects are based on physics which is difficult to grasp, on technology which is not mature and on an elaborate architecture.

I would like to have a look at the different stages of the development of a mega-engineering project and the difficulties associated with them, and to explore the potential solutions to overcome these difficulties. Actually, for each problem, I will present two types of solution: one soft (applying and improving existing methods) and one hard (bordering on methods from science fiction).

Please stay tuned for the first part: how the idea of a mega-engineering project comes to light.


Nuclear fusion: the big picture

December 13, 2010

As an engineer and a physicist, I have to deal most of the time with the details of fusion: how to optimize a particular component of one of the heating systems, how to understand the distribution function of fast ions during some MHD event. Busy with the intricacies of the day-to-day work, I often forget the long-term purpose of nuclear fusion: to produce energy for the grid in an effective and profitable way.

A short reminder of the big picture is thus sometimes welcome. I will summarize some guidelines in this post; the reader interested in the topic should refer to the presentations and articles of David Maisonnier from the European Commission or Hartmut Zohm from the Max Planck Institute for Plasma Physics, who have put a lot of effort into popularizing fusion research: Power Plant Conceptual Studies in Europe, Overview of Reactor Studies, On the minimum size of DEMO.

The purpose of fusion research is to develop a Fusion Power Plant (FPP).

I will not tackle here the question of why we need fusion as a source of energy: it is a very controversial issue where emotional and political inclinations play an important role. If you are interested in the subject and want to form your own opinion, please visit the site of the International Energy Agency, where the World Energy Outlook 2008 is free to download. A very good basic physics approach is also given by David JC MacKay. If I have time later, I will try to enter this debate in more detail but, for the moment, I take the three basic assumptions for nuclear fusion as given. You can agree with them or not, but they are the starting point of studies on Fusion Power Plants:

  • Fusion is a relatively clean source of energy
  • Fusion is a safe source of energy
  • Fusion fuels are available for everybody (energy independence)

The EU Fusion Programme is reactor-oriented: one of its purposes is to clarify the nature of an FPP, to identify the technological and conceptual gaps between our present knowledge of fusion and what is required for an FPP, and to establish a plan to bridge these gaps.

The European Power Plant Conceptual Study (PPCS), finished in April 2005, identified 5 types of power plants, all based on tokamaks in steady-state mode, ranging from limited to advanced extrapolations in physics and technology.

A typical Fusion Power Plant has to meet the following main requirements:

  • Concerning safety and environmental aspects, there shall be no need for emergency evacuation, no active system required for safe shutdown, no structure melting down following an accident, and minimum waste to the repository. This requirement is the main difference from a fission reactor: an accident in a fusion reactor must not impact the population. The last specification, on waste, is, I must admit, vague, and it is a weakness of fusion power plants working with tritium: materials are activated by fast neutrons. But the amount of radioactive waste will depend on the architecture of the power plant and on the nature of the materials used. The aim is to reduce it as much as possible.
  • The operation of the plant shall make it possible to produce 1 GWe as base load with an availability of 75% to 80% and only a few unplanned shutdowns per year.
  • Economically, a fusion power plant shall not be more expensive than other "acceptable" energy sources.
  • Last point, the solution should be accepted by the public.

The standard solutions are based on the ITER design with some minor extrapolations of the IPB98(y,2) scaling law (this scaling "predicts" the energy confinement time as a function of the machine parameters). More advanced solutions bet on improvements in performance: better confinement, strong shaping for better current profile control, and minimization of the divertor loads.
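To make the scaling concrete, here is a minimal Python sketch of the commonly quoted form of IPB98(y,2) in engineering units; the coefficient and exponents are as usually reported in the literature, but check them against the ITER Physics Basis before relying on them, and the example parameters below are only illustrative, roughly ITER-like numbers, not an official prediction.

    def tau_e_ipb98y2(ip_ma, bt_t, n19, p_mw, r_m, kappa, eps, m_amu=2.5):
        """Thermal energy confinement time [s] from the IPB98(y,2) H-mode scaling.

        Inputs: plasma current [MA], toroidal field [T], line-averaged density
        [1e19 m^-3], heating/loss power [MW], major radius [m], elongation,
        inverse aspect ratio, average ion mass [amu].
        """
        return (0.0562 * ip_ma**0.93 * bt_t**0.15 * n19**0.41 * p_mw**-0.69
                * r_m**1.97 * kappa**0.78 * eps**0.58 * m_amu**0.19)

    # Illustrative call with roughly ITER-like parameters:
    print(tau_e_ipb98y2(ip_ma=15, bt_t=5.3, n19=10, p_mw=90,
                        r_m=6.2, kappa=1.7, eps=0.32))

Playing with the inputs of such a function is exactly the kind of parametric exercise the power plant models are meant for: it shows immediately which parameters the confinement time is most sensitive to.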

The technology is based on the same coolant for the different parts of the tokamak (either water, helium or even LiPb): on the divertor, with fluxes between 5 and 15 MW/m2 (depending on the solution adopted), and on the first wall, with 0.5 MW/m2 on average and 1 MW/m2 peak.

The blanket structural material is EUROFER, a low-activation ferritic-martensitic steel (550°C maximum temperature), with a lifetime of 150 dpa (displacements per atom). For an average neutron wall load of 2 MWa/m2, this corresponds to 5 full-power years.

The vacuum vessel is made of AISI 316LN stainless steel, water-cooled, and must be re-weldable to allow major repair operations.

The last critical technology concerns the magnets: they are assumed to be like those of ITER: low-temperature superconductors, Nb3Sn or NbTi, cooled with liquid helium. Their lifetime is set by the irradiation limit of the epoxy insulation.

These are the basic assumptions used to model Fusion Power Plants: it does not mean that they define the future design. These models are developed as guidelines, to get orders of magnitude of the efficiencies achievable with present knowledge. In other words, they make parametric studies of fusion power plants possible: playing with these models shows the weight of the different parameters and helps the designer choose the most sensitive ones.

To give an idea of the results, the most conservative design gives an efficiency of 30%, the most advanced 60%. The problem with the low efficiency of the conservative design is the amount of power used for active helium cooling and for additional heating.

I will stop here for today; this should give a small overview of the first steps needed to build not an experimental reactor but a real fusion power plant, and of how difficulties and technology gaps are evaluated.


Deutsches Museum

December 13, 2010

I must say that I have already spent a LOT of time in this museum, one of the biggest technology museums in the world. But I never had much time to get interested in all the various exhibits, or even to take pictures of them. Indeed, most of the time I was with my children, who have a dedicated section where they can play with water, with a real fire truck and many other toys.

Now that they are a bit more autonomous and interested in their environment, I have some free time to enjoy the exhibitions: it is more than entertaining and informative, it is an ode to the technical achievements of mankind. Here there is no shame in being an engineer or a technician: their genius, or even their simple ideas for solving practical problems, is presented and explained in a simple and attractive manner.

Naturally, I took a look at the astronautics section. Of course, it is not comparable with some museums in the US (for instance, the one in Huntsville, at MSFC), but there is some nice stuff to discover: a small piece of moon rock offered by the US government to Germany, the combustion chamber of an Ariane Vulcain 2 engine.




Some books on big projects

July 6, 2010

I like books that mix the human adventure with the technological background. They illustrate how giant projects are achieved: the difficulties in technology, management and human relationships. It is an opportunity to learn a lot (although it will never replace day-to-day experience). Here are some books I enjoyed a lot:

  • Chariots for Apollo: the history of the Apollo program with all the behind-the-scenes stories and political issues. A must-read for space aficionados.
  • Giant telescopes: a book by P. McCray on the construction of the Gemini telescopes
  • Science of JET: yes, tokamaks also have a reference book, by J. Wesson. The construction of the biggest tokamak, the technical pitfalls, the discoveries. And it is free to download.
  • X-15: Extending the Frontiers of Flight: a free book from NASA about the adventure of the fastest plane of all time. More than 600 pages of pure entertainment.

I will complete the list if I have further ideas or suggestions.

