Scientific publications

February 28, 2018

Preamble: I wrote a draft of this post some months ago but did not publish it because I found it at first too long, a bit too ambitious in its goals and not conclusive enough to be truly relevant among the many articles published daily on the topic. Yet, after stepping back from the area of research, I realize that these criteria do not matter. This post summarizes some of my ideas at a given point in time and can all the same serve as a basis for further thinking.

This is the beginning of a series of posts to establish an overview of the state of applied physics research. It is mostly an attempt to clarify for myself the good and bad aspects of this field of research based on my experience. This experience is, by nature, limited to the area of plasma physics and to what I have read from other related fields. This is an opinionated view and in no way a methodical scientific study.

It starts here with scientific publications, which are the visible part of the iceberg and attract most of the attention from a wide audience.

The researcher has three types of relations with publications: as a reader to get information, as a reviewer to validate the results of colleagues, and as a writer to present his own results. The failures and advantages of the present publishing system will be detailed for each type of relation, with an attempt to draw some conclusions about what can be done, or is already done, to improve the process. I will also highlight the issues which are direct effects of the structure of the academic system and which cannot be solved by just changing the way articles are published.

Publications as a source of information

When you start to study a topic, you have to “stand on the shoulders of giants”. Publications are these shoulders. They provide the information you need to understand the status of your area of research: what is known, what is not known, what the issues are, what is unclear. The researcher today has four sources of information to access this existing knowledge: books, articles, the internet and human relations.

  • The purpose of books is to cover a well-established body of knowledge: there you will find the tools, mostly theoretical, to use in your research. The problem is that books for specialists are very expensive and mostly only available on paper. Institutes have libraries where you can find them, but this is a very analog process: you have to go there, find the book, wait if it is already lent out, keep it for a limited amount of time. When you have to browse a lot of books, it is inconvenient. Fortunately, you can now find scanned versions of most technical books on the internet. They are only PDFs, but that is better than nothing.
  • Articles present the state of the art of research: topics which are not 100% settled but which bring new ideas and new data. The purpose of research is mostly to validate existing articles, to invalidate them and propose alternative solutions, or to unify several articles. This is your daily bread. All articles are now online. When you work in research in western countries, the price is invisible to you; only the administration sees what an article costs. It is not the case in other countries, and the success of Sci-Hub shows how high this price is. But, in my opinion, it is not the price in itself which is too high, it is the service/price ratio of the publication which is too low. Because the question is: what do you get in a publication? A PDF file, most of the time limited in pages, with the assurance that two or three people read it beforehand and validated it. That’s it. But you get only an assurance, not direct insight into the reviewing process: you don’t know what issues were raised or how they were answered. You don’t know if other people found problems with the paper or confirmed its validity. Everything is completely opaque. In addition, a paper is mostly text with a few graphs and schematics. You have no direct access to the data, to the exact experimental protocols or to the codes used to get the results. Technological solutions exist to make the process more transparent and reproducible, but they are not used for this purpose. Technological tools are mostly used to streamline publications in their present form and to make their number skyrocket. It has never been so easy to publish articles. You could do it every month, and some scientists do just that, because it is better for the career. So you get overwhelmed by the amount of articles, with a very low signal-to-noise ratio. Scientific publication is at the age of YouTube comments. This is where the two remaining sources of information play a role.
  • The internet offers knowledge beyond articles. You can find blog posts by fellow scientists who talk more freely there about their research, where you can catch details which were missing from the publications. You can find contributions by laymen who just have fun with technical stuff and spend a lot of time shooting videos with their GoPro camera to explain some obscure electronic construction which turns out to be of utter importance for developing your experiment. The information there is not organized and not well structured, but it offers a wider range of innovative ways to expose scientific content.
  • The last source of information is your social network: your colleagues, your fellow scientists, the guy or the lady you discuss with over coffee and who will offer you their experience and contacts to explain something you did not understand. They represent the unstructured, informal channels of information. This is a precious source, which is both underestimated (the myth of the lone genius scientist, but this is another story) and very hard to reach. It requires competencies in networking, team building and public outreach, which are far from being a major part of scientific education.

Publications for reviewers

The second relation that a scientist can have with publications is when he is requested to review them. From my experience and that of people I know, this is most of the time a gratifying experience for two reasons: first, it is a recognition of your expertise, and second, it brings you to topics related to your area which you wouldn’t necessarily have spent time studying. It opens an opportunity for curious people. So, even if it takes time out of my schedule, it is always a pleasure to review another paper. Yet, what I miss is a real discussion with the authors. You have two or three passes of questions and answers and that’s it. In my opinion, the process is not iterative enough and not open enough to more people. The counter-arguments are usually twofold. First, versioning could bring instability to the system: without a definitive version of a paper, there is no solid ground, no reference which can be used to progress further. This is true, but only partly. Psychologically, it is very helpful to have a finished and published paper: you have the feeling of getting something done and it can be used as a showcase for your work. In reality you know the approximations and the uncertainties present in the work, but you wipe them away and get a boost of energy for the next publication. Yet, like in software, you could imagine major and minor releases, stable and unstable editions. That would perfectly fit the research process: one paper which is improved, enriched and extended following the progress of your work. The second reproach is that open review by a wide audience would lead to fewer reviews, because nobody would feel responsible for it or spend time on it. This is also only partly true. It would be necessary to have main reviewers, like you have main contributors on pieces of software, who would be in charge of the main review. But I am pretty sure that if you open the review to the community, and you enable a rational dialogue between the authors, the main reviewers and the community, you can dramatically improve the quality of papers. There are many publications where I would like to comment, to ask, to suggest. But it is not possible in a systematic way. You can do it at conferences when you meet the authors, but this is informal and, most of the time, without any follow-up. The key to success is to create the proper framework and adequate tools to facilitate these processes.

Publications for authors

You have managed to get results and you want to broadcast them to your scientific community. There is only one way: to publish in a journal. You are confronted here with several issues, starting with the choice of the journal, which is mostly defined by a compromise between the relevance of your results, the impact factors and who your co-authors are. Then, you are constrained by a format: number of pages, non-interactive, text and images only. This is not necessarily a disadvantage. For instance, I find Physical Review Letters absolutely awful for the reader: how useful can four pages be? It is like understanding the Syrian crisis only through a dispatch from a press agency. But for the author, it is a very interesting exercise: it forces you to extract the essence of your research, to determine what makes your results interesting and nothing else. It brings a lot of structure to scientific thinking. Yet, a lot of this effort is spoiled by the time needed to format the article. No publisher has ever spent time and money improving publishing tools, plot creators and other useful editing frameworks. At best we get templates and LaTeX classes. All the interesting software comes from outside publishing. This is also true for most research institutes, which push for more publications without trying to improve the editing process itself.

The future of science publishing

There are tools of all sorts to improve the communication of science. Yet the situation seems stalled, with disagreements and divergent interests between the stakeholders of science (researchers, institutes, publishers). As in engineering, there are two ways to design a new system of science communication. Top-down, with an initiative of the decision-makers: this can happen with a change of generation of science leaders, with people who have only known a publishing system in crisis and are aware of the problem and of the possible solutions. Or bottom-up, with the self-organization of scientists who collectively manage to agree on the standardization of a more effective way to communicate and validate scientific knowledge. Solutions will probably emerge, if they emerge, from both directions and will require time and patience. But these two elements are the most effective weapons of Science.


FPGAs for the average engineer

October 4, 2016

You know the Raspberry Pi and the Arduino? Throw them away, these are toys for children. Systems on Chip are for adults.

The Ishtar plasma antenna

FPGAs to power a plasma source

I am kidding, I love the Raspberry Pi, but what SoCs offer opens a new dimension. Let’s take the example of the Red Pitaya and its two main features: a Zynq-7000, which combines an FPGA with a dual-core CPU, and two ADCs and two DACs at 125 megasamples per second, which make it possible to receive and emit signals in the MHz range. All for a price of around 300 euros. This means, first, that you have a fast digitizer available and can play in the radio range. And second, this digitizer is connected to an FPGA, so you can do processing operations like FFT, filtering and so on at MHz speed! This is really a great tool, not only to learn but also for practical applications. I use it, for instance, to generate a signal which is amplified by a 1 kW amplifier and injected into a plasma to investigate the propagation of waves in it. This is super easy to code in C and Python, you can use the GPIOs to get trigger signals or activate other systems, and you can integrate it easily into a global control system. I use it as well to measure high-frequency instabilities in a tokamak plasma, with a real-time FFT to reduce the amount of data to store.
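
To give an idea of how simple the scripting side is, here is a minimal sketch of driving the Red Pitaya signal generator from a PC, assuming the SCPI server is running on the board. The IP address is a placeholder, and the exact command strings and port may differ between firmware versions, so treat this as an illustration rather than a recipe.

    import socket

    # Address of the Red Pitaya on the local network (placeholder) and the
    # default port of its SCPI server (assumed to be 5000).
    HOST, PORT = "192.168.1.100", 5000

    def send(sock, command):
        """Send a single SCPI command terminated by CR+LF."""
        sock.sendall((command + "\r\n").encode("ascii"))

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        send(sock, "GEN:RST")                 # reset the signal generator
        send(sock, "SOUR1:FUNC SINE")         # sine waveform on output 1
        send(sock, "SOUR1:FREQ:FIX 2000000")  # 2 MHz, inside the analog bandwidth
        send(sock, "SOUR1:VOLT 0.5")          # amplitude in volts
        send(sock, "OUTPUT1:STATE ON")        # enable the output feeding the amplifier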

As standard, it comes with a free version of Vivado (i.e. missing all the high-level features, but fine, you do not really need them). The most difficult part is to install it and to make room for its xxx GB of required space. The program itself is not buggy (at least, not at the level at which I use it) and you can really learn how to code hardware in Verilog or VHDL: it is rather exciting when you understand how it works and start to see the gates and flip-flops through the code.

The big advantage of the Red Pitaya is that it is open source. Xilinx also provides a lot of documentation for Vivado. So, when a problem occurs (which happens every two minutes at the beginning), you have resources to find the solution rather easily. I would like to give here the most interesting links to learn about the hardware:

  • Red Pitaya Notes by P. Demin: this is the big, big, big reference. There is a bunch of interesting projects, with SDR and even nuclear magnetic resonance, and a clean, version-controlled method to manage the FPGA code and the associated Linux ecosystem.
  • Vivado’s YouTube tutorials by M. Sadri: everything about Vivado, from simulation to debugging. It takes time to get through them, but it is time well spent.
  • Building Linux on Zynq: this basically teaches you how to install an embedded Linux and the roles of the different components from the boot to the shell.

Beyond that, you can start to do very interesting stuff: build bare-metal applications which do not require an OS, try Rust to gain in safety, and develop your own flexible and optimized PLC that suits your needs and not the bank account of the big instrumentation companies.


Stairway to Heaven

September 30, 2016

This was this week’s hype in the aerospace industry: Elon Musk presented his vision for reaching Mars and beyond. There are countless analyses and reviews of his presentation (here, for instance, for a technical one), weighing in either for or against Musk.


The ITS on Europa. Credit: SpaceX

I got the question several times from friends and colleagues about what I, as an ex space propulsion engineer, think about the feasibility of this vision. The bare answer is: I don’t know. I mean, there is not enough information in this presentation to evaluate the feasibility of the Interplanetary Transport System (ITS). I wonder how pundits can form an opinion on it. There have been countless Mars mission design proposals in the past. This one is not really different. It is both credible and far-fetched because it is built on the same model: you assess the requirements (in terms of costs, mission duration, target), you take the existing technology (to be credible) and you extrapolate it to meet the requirements (and it may look far-fetched or not, depending on whether you are part of the proposal team). So basically, here, SpaceX develops the cost model to have almost routine trips to Mars (very cheap, but it is a target – it makes sense to have something cheap if you want to “democratize” space); it takes the existing technology, a bit improved (the Raptor engine, the reusable launcher with complete reusability instead of only the first stage); and it extrapolates the system (increased number of engines, huge composite tanks, …) to be able to have cheap transport. This is what was presented. There is no new concept, no really new technology.

So, how can you assess the feasibility of the mission? You cannot, because data are missing on the critical part: the execution. And in the space industry, execution is what separates failure from success. What methods do they want to apply? How do they want to adapt their organization and their team to meet the challenges? What new tools will they use to turn this concept into reality?

If you think about it, SpaceX has not invented new technologies or radically new mission concepts. They have taken existing ideas that other private companies have also taken (vertical landing – McDonnell Douglas and Blue Origin; the space capsule – Orbital). I assume that NASA played an important role in the transfer of technology towards private companies and that they didn’t need a huge research and development effort. But what Musk did, and this is a huge change, was to set up a modern organization managing both the system and the underlying technologies (propulsion, GNC, actuators), something that the big players like Boeing or EADS didn’t bother to do because technology is low-level. Adding to that modern IT tools to automate manufacturing and production, it was possible for a relatively small team to develop and optimize, in a very efficient way, a new, partly reusable launcher and the associated space capsule. In the case of the Mars mission, there is no indication of what they will do in terms of organization, of how they will scale their methods to accomplish this challenge. For instance, they showed this big composite tank. Nice, but how did they build it? The difficulty is to create an industrial robot able to weave such tanks in large series while respecting the required tolerances. No word about that. Yet this is where the feasibility of the project can be assessed. But this is also the heart of SpaceX. I understand that Musk does not want to reveal his trump cards.

So, what about this presentation? What is its purpose, if not to present the technical details of the project? In my opinion, there are two goals, one external, one internal.

Externally, you have to create the proper spirit for this kind of expensive endeavor. This is a classical strategy when you want to sell a project where you know in advance that people are not convinced or concerned: you show, far in advance, the most advanced and incredible version of your project; the first time, people will say you are crazy; the second time, they will say no; the third time, “mmmm”; the fourth time, “why not…”; and so on, until they completely change their mind, say “let’s go” and sign the check. People need time to get used to a crazy idea. Very probably, you will not get what you asked for at the beginning, but a limited version which will correspond to what you actually wanted. This is a very effective long-term strategy to fund new experiments. I can completely imagine that this is what Musk wanted to do. People will start to think and rethink and rethink. When the negotiations for the funding arrive, the ground will be ready and people will be used to the idea. Probably, creating a new civilization on Mars is not really his ultimate dream (on Mars, really? why not in Siberia? Or in North Dakota – I am kidding, I love North Dakota). If he manages to get a first crew there under the flag of SpaceX, he will have left his mark in the sands of history. Anyway, his rhetoric must revolve around the idea of colonization and not of exploration, to avoid the major counter-argument against manned spaceflight: the robots! If he wants to send people to explore, his opponents will want to send probes, which are probably more efficient for this work. But if he wants to create an interplanetary species, there is nothing to oppose: you touch the heart of mankind as a group of settlers.

Internally, the goal is easier to understand: to create the right spirit at work. You are not working on a rocket that sends communication satellites for some investment fund; you are working on an interplanetary crewed spaceship. This makes a huge difference. You are part of the conquest of space. In these conditions you can work 24 hours a day, 8 days a week.

To conclude, the presentation makes sense in terms of communication strategy, less in terms of feasibility of the concept. If you are not an insider, you have to believe it or not. As an outsider, I follow my instincts and my centers of interest: I find chemical propulsion a bit boring 🙂 I admire these massive and loud engines, like the old steam locomotives; they are jewels of engineering. But I am more attracted by electric systems and other more exotic phenomena. I believe (with no way to demonstrate it yet!) that there is a huge amount of energy to tap into there and that the proper way to engineer it still has to be found. In addition, with cheaper and cheaper Earth-to-orbit transport, it becomes possible to test riskier technologies. This will be a fun time!

 


Look to windward

September 27, 2016

I have always been fascinated by the title of this novel by Iain M. Banks, even though I have never really understood its true meaning in the story. Whatever; I have this expression in mind now that I am trying to build a team for a project of mine.

And the wind you feel when you are selling your project to potential teammates is not a light breeze grazing your hair, it is a violent hurricane battering every part of your body. Looking to windward is painful.

The project is, in my opinion, not bad: there is a good idea, a potential market and possible long-term developments. And I kept its objectives reasonable and achievable with a modest amount of funding to start with. The technical challenge of developing the software is limited as well. Thus, it is a nice medium-sized project with a vision, a well-formed pitch, technical feasibility and the potential to reach a market.

Yet, what hell it is to find people ready to participate in it. I did not ask for 100% of their time; no, it is not necessary. It can be a side project at the beginning. But during the discussions I get all the imaginable risks and possible failures; I am told about the hard competition, the difficulty of getting funded, the bugs, the security leaks and the other troubles a piece of code can offer. I do not even want to imagine the reaction if I proposed to start a new SpaceX 🙂

It is incredible how pessimistic people can be. Is it because they care about you and want to prevent you from suffering? Is it an animal instinct to escape danger? I do not know; maybe a mix of these. No wonder that successful entrepreneurs deploy a reality distortion field: it is the only way to deal with the surrounding negativity.

The positive aspect is that you learn to polish the presentation of your project and to improve your counter-arguments. The negative aspect is that I still have not found a soul to share this project with.


The art of science communication

September 20, 2016

If only science were a game between you and nature alone! Alas, it is not that simple; our environment is far too complicated to be understood by an individual. Even if the myth of the lone genius Einstein persists, the reality is that science, whatever its domain of application, is an endeavor at the scale of humanity. A problem can be addressed only through cooperation, discussions, disputes. Consequently, the talent of the scientist resides as much in his communication skills as in his theoretical and experimental proficiency.

I came to dig a bit more into this topic while reading this article highlighting the need for a simplification of scientific communication. I agree that there is a problem of communication in science, but it may not be due only to the elitist style. If we want to better understand the issue, we have to consider the different types and levels of communication that the scientist has to deal with. The frontier between the different types is rather blurry and depends on the targeted audience and the purpose of the communication. But we can distinguish the following levels.

The first level of communication is the routine communication with teammates, people working on the same topic and aiming at solving the same scientific problems. It is a highly specialized discussion where the use of jargon is recommended to keep a high level of accuracy and avoid misunderstandings. The communication is in this case a mixture of equation writing, drawing, exchange of code and rational discussion. This is a difficult exercise because it is absolutely necessary to be sure that the participants in the discussion will share, at the end, the same understanding of the problem and of the possible solutions. From experience, a lot of time is lost because of misunderstandings. It is also difficult because scientists often think that discussion with colleagues is a waste of time at the expense of pure individual thinking.

The second level of communication is the publication: it can be a report, an article, a digital notebook. The purpose here is to communicate in detail the method, the results, the analysis and the conclusions of the work so that your peers can try to reproduce, falsify, confirm or improve it. Therefore, it has to be clear, accurate and complete. This level is typically what is expected from a scientist. There is a lot of ongoing discussion on the problems of reproducibility, of peer review and of journal impact factors, but that is a slightly different story.

The third level of communication is the oral presentation. The purpose here is to attract the attention of the scientific community to your work, whether to get collaboration, help, contradiction or funding. An oral presentation is, by definition, limited in time and thus can focus only on a limited number of points. Therefore it cannot address technicalities. The communication has to highlight some key ideas; it has to activate some triggers in the audience to motivate them to look at your work in more detail (through communication of the second and first levels). Honestly, given what I see during conferences, this is an exercise which is, most of the time, poorly done: slides overloaded with plots and text, no coherent structure, no context explained, no vision. I suspect that most scientists fear that they cannot use storytelling and simple slides without being criticized for a lack of rigor. There is a balance to find. A presentation, even a scientific one, has to be compelling.

The last level of communication is the communication with the public. Void. Blank. This is the ultimate difficult exercise. Hell on earth. And it has become worse in recent years. Before, the main contact with the public was through the media and the journalists, and only some chosen, distinguished scientists were allowed to talk to the journalists. So the difficult exercise of explaining science to a broad audience was the responsibility of the journalist. Difficult, because you have to find the compromise between the accuracy of the facts and the interest of the public. We touch here the heart of the problem: the scientific method (but not its results!) is fundamentally not attractive. By definition, it is rational and not emotional. Most people expect emotion. There can only be a conflict when we want to communicate about science. Anyway, with the development of the Internet and of social networks, the separation between the public and the scientists has faded away. We are now in a position to talk face to face with the audience. And the audience expects communication from the scientist; it expects him to play a social role, even a political one when it comes to topics like climate change or biotechnology. This is a role for which the scientist is almost not prepared. The difficulty is even greater now that society faces a problem with facts. The exact reason for this phenomenon is unclear: the explosion of data, the increased complexity and hyper-specialization of science, degraded education. Whatever it is, people tend to pay less and less attention to facts, data and rational discourse (if you want some proof, listen to some well-known politicians; a more in-depth discussion is to be found on Rhys Taylor’s blog). So the scientist is expected to speak out, but the type of communication for which he is trained will not be heard. It can only end badly: either he shows viewgraphs on TV or he moans “trust me!” (which is the worst thing to say in science). Honestly, I still have no answer as to the behavior to adopt in this case. This is still experimental ground. But the scientist must enter this ground, communicate with the audience and find strategies to make his voice loud and clear, so that the public gets interested in science again.


The philosophical physicist

August 12, 2016

I could have called this post “The war between science and philosophy” or “The zero-sum game”, but I found that too childish for a subject which is important for the future of physics. There was a recent update in the “discussion” of the role of philosophy in science: Massimo Pigliucci and Sabine Hossenfelder, to take the most recent insightful articles, took position on the claim that “philosophy is not useful to do physics”. As a baseline physicist (i.e. not one working on the fundamental questions of the universe), I have to react and say why I need philosophy. First, please excuse in advance my lack of clarity and accuracy: I do not have the experience and talent of most participants in this debate. Yet I hope to convey enough of my message to make it useful.

I would first like to cut short one objection: that I am not a theoretical physicist working on “advanced subjects” like string theory or loop quantum gravity, and thus am not entitled to discuss this kind of fundamental issue. Indeed, I am a plasma physicist; I try to understand the phenomena occurring in a plasma, how it is produced, how it reacts to some stimuli. The most “advanced” tool that I use is quantum field theory, to calculate some quantities involved in the measurement of the plasma electric field in a magnetized plasma through the Stark effect. Beyond that, I follow what happens in theoretical physics (I do not like this term because it implies a fundamental separation between experiment and theory) and I enjoy what I am able to grasp of the beauty of the constructions (as I enjoy a glimpse at category theory or at harmonic forms), but I have no practical experience there. Yet, I think that the reflection occurring at the level of theoretical physics affects the whole of physics, whatever the domain; otherwise it would be a strong, if not deadly, blow to its coherence.

To address now the core of my ideas: as a physicist, philosophy is useful to me at two levels. First, at a practical level, because I am a human and not a purely rational machine, and it is sometimes difficult to bridge the gap between the human part and the physicist part. Second, at a theoretical level, because the goal of a physicist, and more generally of a scientist, is to understand the world as a whole and, unfortunately, science fails at some point. Let’s examine these two points in more detail.

The job of a physicist is to apply the scientific method, which is characterized in daily life by two properties: rationality and falsifiability. You take some assumptions, you derive a model from them and experimental predictions from the model, you run some tests and check whether they validate the model or not. If not, you check that your chain of thought is rational and, if it is, you change the assumptions. So, from the assumptions to the test/theory comparison, it is basically algorithms in action (sorting, pattern matching, tree traversal), except that for the moment only human brains can deal with the fuzziness of reality and the absence of clear-cut borders to the area of investigation: you can always find new ramifications to other topics and you have to expand your analysis. But computers are progressing fast and taking over a big part of this work.

But what about the assumptions, where do they come from? From other assumptions. Good, you see the problem. So, there is always a moment (or even several) in the day of the physicist, when all scientific methods are exhausted, where he scratches his head with a sigh. What is the practical solution there? He takes a step back: he tries to establish analogies with other problems, he conceives random or impossible assumptions, he drinks a coffee or goes to the theatre until the inspiration comes back. But the most effective solution is to go to a colleague’s office and discuss. And when the problem is serious (i.e. all scientific ways are exhausted), the discussion is of a philosophical nature (even if not with the quality of experienced philosophers): he tries, with his colleague, to elaborate concepts with words. Who said that words were not accurate enough to do science? They are not as accurate as equations, but their fuzzy nature is of great help when your mind is trapped by the rigidity of the equations. They give you the room to expand the mind and to discuss with your colleagues. How many scientists discuss only with equations? It is not for nothing that we are asked to reduce the number of equations in a presentation: they are a bad tool for discussion, and presentations are an invitation to discussion. The philosophical discussion reduces the accuracy of the ideas but gives more flexibility and opens new areas. In this sense it is complementary to the scientific method. Through discussion (with yourself or with your colleagues) you explore new ideas and you establish new assumptions. When you come to an agreement, you apply the scientific method to them and the machine is running again.

This is also where you understand that experimental results are very useful, not only to validate or invalidate a theory, but to feed the discussion: they are as fuzzy as words, or even fuzzier. The relation between two experimental data sets will never be perfectly linear; you will have some scatter, which will invite discussion: is it really linear? Should we add a bit of non-linearity to the interpretation? New ideas often emerge from the discussion of experimental results.

This is why scientists should be better trained in the philosophical method: it would improve their discussions and give them the tools to elaborate concepts more easily before transforming them into scientific models. It would also probably improve the quality of human relations and remind them that they are not purely rational machines (and maybe prevent some nervous breakdowns).

The second level of interest of philosophy is more fundamental. There is a point where the scientific method does not work when you try to understand the world where you live. Actually, it breaks down for most daily issues (except if you live in a lab or your name is Sheldon): your relations with society, politics or your love affairs. You can write a numerical model of your relationship and test it, but if the test fails, it will not be possible to change the model! Facing this situation, either you just live your life or, if you really want to understand, philosophy is the only possible rational way to approach the problem. This is all you can do when you meet the absurd, as defined by Albert Camus in The Myth of Sisyphus: the absurd arises when the human need to understand meets the unreasonableness of the world, when “my appetite for the absolute and for unity” meets “the impossibility of reducing this world to a rational and reasonable principle”. The worst moment for a scientist.

Of course, you can say that, in the end, physics will explain everything (we could discuss that; personally I am not convinced, not with the present tools) and that we are just limited for the moment by our ignorance. Sure, but now is the moment when we live, and if we want to avoid too much frustration, we have to use all possible rational tools to quench our thirst for knowledge or, at the very least, to deal with the world.

 


About Drupal

July 19, 2016

Our plasma source project involves several teams across Europe. We wanted a centralized source of information that is remotely accessible. Our idea was to have an intranet where we could store the documentation, the to-do lists, and a gallery of pictures and videos. And we needed a solution that was fast and easy to deploy. After some quick trade-offs, we chose Drupal, which is based on a classical HTML/PHP/MySQL stack.


The big advantage is that, indeed, you get a polished solution very quickly, I mean in a few weeks. Everything is controlled through the integrated administrator’s GUI, the online documentation is abundant, its use is smooth and I have had very little downtime.

So if you just want an intranet with standard features, Drupal is really the right solution. Yet, in parallel, we have developed our data processing system, Gilgamesh, which is based on Jupyter and thus on Tornado in Python. As a result, we found ourselves with two systems with different architectures. Of course, they have different purposes, but for some applications it would be interesting to have bridges between the two systems. For instance, in Gilgamesh, you can make references to papers in LaTeX style; it would be useful to reference documents which are in the Drupal system. In theory, it should be possible, since the document reference is saved in a MySQL database and the document itself in the filesystem. But the architectures are so different that, in practice, the interface is a nightmare to develop.
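
To make the “in theory” part concrete, here is a minimal sketch of what such a bridge could look like on the Gilgamesh side, assuming a Drupal 7-style schema where attached files live in the file_managed table. The table and column names, the credentials and the pymysql dependency are assumptions for illustration, not our actual implementation.

    import pymysql

    # Connection parameters are placeholders for the Drupal database.
    conn = pymysql.connect(host="drupal-server", user="reader",
                           password="secret", database="drupal")

    def find_documents(keyword):
        """Look up attached files whose name matches a keyword.

        Assumes the default Drupal 7 schema: file_managed(fid, filename, uri).
        Returns a list of (fid, filename, uri) tuples.
        """
        with conn.cursor() as cur:
            cur.execute(
                "SELECT fid, filename, uri FROM file_managed "
                "WHERE filename LIKE %s",
                ("%" + keyword + "%",),
            )
            return cur.fetchall()

    # Example: resolve a \cite-like reference to a report stored in Drupal.
    for fid, name, uri in find_documents("design_report"):
        # uri is typically of the form public://..., which still has to be
        # mapped to a real path or URL on the Drupal host.
        print(fid, name, uri)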

Therefore, in the future and for the next project, I will avoid Drupal and start any intranet on a Tornado solution. In this case it will be easier to integrate it into more complex systems like Jupyter.
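
As a rough sketch of what I have in mind, a minimal Tornado application exposing documents over HTTP could look like the following. The handler, route, port and the in-memory store are only illustrative, but a service written this way lives in the same asynchronous world as Jupyter and is straightforward to extend.

    import tornado.ioloop
    import tornado.web

    # Placeholder in-memory "document store"; a real intranet would query a database.
    DOCUMENTS = {"1": {"title": "Plasma source design report", "file": "report_v2.pdf"}}

    class DocumentHandler(tornado.web.RequestHandler):
        def get(self, doc_id):
            doc = DOCUMENTS.get(doc_id)
            if doc is None:
                raise tornado.web.HTTPError(404)
            self.write(doc)  # Tornado serializes the dict to JSON

    def make_app():
        return tornado.web.Application([
            (r"/documents/([0-9]+)", DocumentHandler),
        ])

    if __name__ == "__main__":
        make_app().listen(8888)
        tornado.ioloop.IOLoop.current().start()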

 

