Jupyter in real life – Part 3: return on experience

July 5, 2016

I presented in the previous part the design of our data processing platform. The launch of the application was progressive, with only two beta testers at the beginning; I now have eight regular users and plan for a maximum of 15 participants (remember that the platform was initially designed for a small team). So I now have a bit of experience running a multi-user Jupyter system and have learned about the advantages and issues of this approach. This is what I want to present now.

Technical choices

I am still hesitating about two choices I made for the processing library: HDF5 (via h5py) and Pandas. I am not sure whether they bring more advantages or more drawbacks.

  • For h5py (but it is basically the same for PyTables): it provides a clean API to save your raw data hierarchically. Your data come from the diagnostics and you can put them into nicely prepared groups, subgroups and metadata. As far as I understand, HDF5 is designed to deal with huge files: you are supposed to put all your experimental data in the same file; it is conceived as a replacement for the traditional directory tree of your filesystem. I didn't do that because my natural instinct fears big files and what happens to them if they get corrupted. And some of them have already been corrupted: [so it happens](http://cyrille.rossant.net/moving-away-hdf5/). By writing one file per experiment, I lose the ability to manipulate the metadata of all experiments in one block. Say I want to compare the maximum magnetic field from experiment to experiment; I have to open each file, read the magnetic field, close the file, open the next one, and so on. With a single file, I would have simply iterated over all groups. To circumvent this problem, I have set up a parallel database that gathers all the metadata (a minimal sketch of this workaround follows this list). It is far from an ideal solution: when I change metadata, I need to write it twice, once in the HDF5 file and once in the database. Another issue with HDF5 is that it is ideal for frozen data structures: you get raw data and you "freeze" them in an HDF5 file. But as soon as you want to modify these data (for example, to add level-1 processed data), it becomes messy. Finally, the API is not suited to concurrent writing: I have to appoint one administrator who is the only one allowed to write to the files. OK, again, for raw data this is not a problem, but as soon as you want people to add processed data to these files, it becomes painful. I have no ideal solution to these issues. Looking around, the general solution is based on the standard filesystem. I am still not sure this is the right way either, especially for managing the metadata associated with each signal.

  • For Pandas, I am also in doubt. It is really powerful for aggregating data (you need a single line to get the average, standard deviation or other statistics of a time series and display it for several experiments), but there are many cases where you have to revert to numpy arrays, which adds long expressions to your Python code. Moreover, access to a single point in a dataframe also requires a convoluted style.

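To give a concrete idea of the workaround, here is a minimal sketch (not the actual Gilgamesh code) of how a per-experiment HDF5 layout can be scanned to rebuild a metadata index, and how Pandas then makes the cross-experiment comparison trivial. The file naming scheme and attribute names are assumptions for illustration.

```python
# Sketch only: file layout ("experiments/exp_*.h5") and attribute names
# ("experiment_id", "max_magnetic_field", "rf_power") are assumed, not real.
import glob

import h5py
import pandas as pd

records = []
for path in sorted(glob.glob("experiments/exp_*.h5")):
    with h5py.File(path, "r") as f:
        # Metadata stored as attributes of the root group (assumed layout).
        records.append({
            "experiment": f.attrs.get("experiment_id", path),
            "max_B_field": f.attrs.get("max_magnetic_field"),
            "rf_power": f.attrs.get("rf_power"),
        })

# One DataFrame gathering all metadata: the "parallel database" in miniature.
meta = pd.DataFrame(records).set_index("experiment")

# The cross-experiment comparison is then a one-liner...
print(meta["max_B_field"].describe())

# ...but access to a single point is more verbose than with a plain numpy array.
first_field = meta.iloc[0]["max_B_field"]   # vs. arr[0] with a numpy array
```
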
There is also a more fundamental point: how to manage the API. I took the obvious solution of putting the API (all the functions specific to our experiments, like plasma models) on the server where the IPython kernels are running, so each kernel has access to it. Main advantages: it is centralized and all changes are reflected to the users immediately; you know that all users have the same models and the same functions. But this solution also comes with drawbacks: this is research, the models evolve quickly, and the underlying functions have to follow these changes. Yet an API has to be stable, otherwise it is not usable. How do you solve these opposing constraints? I have no clear-cut answer: sometimes I have to change the functions and their parameters and it breaks the existing notebooks; sometimes I create new functions. But it is not very clean. In addition, access to the content of the API, the source code, is not easy; you can use a magic command for that, but it doesn't give you a very nice display. A more elegant idea, which I am implementing, is to use notebooks as the support for the API. Basically, you write all your API functions in a set of notebooks (with the great advantage that you can add text, pictures or whatever is necessary to explain your code and your models) and you put these notebooks in the central repository. Now you can create a notebook and, instead of loading Python code with an import, you load the API notebooks like a module. You can even assign version numbers to the API notebooks, so that you keep compatibility when your API evolves: you just call the right version of the API. You can also copy an API notebook, modify it to add some functionality and, when these changes are validated, share it with others on the central repository. One step further is to use these API notebooks to provide web services.
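
To make the notebook-as-module idea more tangible, here is a minimal sketch of loading an "API notebook", assuming its code cells only define functions and constants; the path and names below are hypothetical and the real mechanism may differ.

```python
# Sketch: execute the code cells of a notebook and expose them as a module.
import types

import nbformat


def load_notebook_api(path, name="api"):
    """Run the code cells of a notebook and return them as a module object."""
    nb = nbformat.read(path, as_version=4)
    module = types.ModuleType(name)
    for cell in nb.cells:
        if cell.cell_type == "code":
            exec(cell.source, module.__dict__)
    return module


# Versioning by file name keeps old notebooks working against old APIs
# (the repository path and notebook name below are hypothetical).
api = load_notebook_api("central_repo/plasma_models_v2.ipynb", name="plasma_models")
# density = api.langmuir_density(...)
```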

Usability

Jupyter for teamwork is great: you write a notebook, you transfer it to your teammates, and they can execute it just as it is: they have the same data and the same API; they can do exactly what you did and correct or improve your work. The principle of narrative computing is also very helpful: you can comment and explain with images, figures, whatever your team needs. This really improves communication and the debugging of problems, in the code but also in the physics models. In addition, the seaborn module brings a decisive visual gain over classical tools. There is a lot of room for improvement and, in my opinion, the future is really bright provided we bring these improvements to life; I will talk about them at the end. But even when the solution you propose clearly brings big advantages, it is not enough to make it available to the users without strong advertising and strong technical support. In any case it takes time to establish it as the reference choice for data processing. In the first days, the most used function was 'export', which makes it possible to transfer data to other tools like Matlab. Several actions are necessary to reverse the trend: propose notebook tutorials, in-depth documentation and in-person training. You first choose the early adopters, the users who are ready to test new products (and there are not so many of them), you run together through some examples, you make some comparisons with their previous codes and progressively push them to stick with your solution.

Other good points are the widgets and the dashboard extension: you can add an interactive part to your notebook, which simplifies life in several situations. Many widgets are available, and you can adapt them to your needs or create new ones. Once you have working examples, it is rather straightforward to make a new one (it is more difficult to make a nice one! Frontend physicists are welcome). So you can publish an overview of your last experiment on the big screen with all the important parameters, or you can display a list of experiments and select the one for which you want a plot of the main parameters. This is really useful. The layout possibilities are for the moment lacking a bit of flexibility; maybe I do not use them in the best way, or the code is still in its infancy. But it can only get better (although [some will say](https://www.linkedin.com/pulse/comprehensive-comparison-jupyter-vs-zeppelin-hoc-q-phan-mba-) that it will be difficult because of the old technologies used; old meaning here not [angular.js](https://angularjs.org/)). In this sense, you can have a look at JupyterLab, which could be the future version of Jupyter: the frontend is entirely rebuilt from scratch on TypeScript and PhosphorJS, which gives cleaner code and an awesome desktop-like application UI.
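
As an illustration of the kind of interactive notebook described above, here is a hedged sketch of a widget that selects an experiment and plots its main parameters. The helpers list_experiments() and load_experiment() are hypothetical stand-ins for the real data API.

```python
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np


# Hypothetical helpers standing in for the real data API.
def list_experiments():
    return [1201, 1202, 1203]


def load_experiment(experiment_id):
    t = np.linspace(0, 1, 500)
    return {"time": t,
            "rf_power": np.sin(2 * np.pi * t),
            "density": 0.5 + 0.5 * np.cos(2 * np.pi * t)}


def show_overview(experiment_id):
    data = load_experiment(experiment_id)
    fig, ax = plt.subplots()
    ax.plot(data["time"], data["rf_power"], label="RF power")
    ax.plot(data["time"], data["density"], label="density")
    ax.set_xlabel("time [s]")
    ax.set_title(f"Experiment {experiment_id}")
    ax.legend()
    plt.show()


# A dropdown bound to the plotting function: pick an experiment, get the plot.
widgets.interact(show_overview,
                 experiment_id=widgets.Dropdown(options=list_experiments(),
                                                description="Experiment"))
```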

But let's go back to the present version: at some point, you will accumulate plenty of notebooks in your account, some in the classical narrative fashion, others with the dashboard aspect. And here we reach a present limitation of Jupyter: the management of notebooks in the tree dashboard is awful: you can duplicate and delete, and that's it. Normally, Jupyter notebooks are stored on the local filesystem and the user can manipulate all his files with the native file explorer. But in our case, with a database-backed filesystem, that is not possible: Jupyter has to integrate a full-fledged file manager. JupyterLab will have one, but in the meantime, maintaining a proper shared set of notebooks is difficult.

Future step

I am really satisfied with the result and with how Jupyter, with a central data API, really improves the research workflow. I see one direction of long-term improvement which could radically change the way we do experiments. For the moment, Jupyter is used only to process the data. The configuration and setup of the experiment are done in dedicated software (in our case Siemens WinCC) through a graphical interface which is our interface to the hardware (a Simatic). Now imagine that you can develop and install a kernel for your signal controllers and monitors. Let's say you have a rack of Raspberry Pis, Arduinos and RedPitayas, with one of them used as a supervisor. You can install an IPython kernel on it with an API which defines the hardware logic (how controllers and diagnostics are interrelated, watchdogs, control loops and so on; with the RedPitaya you can even have an FPGA part for fast processing) and offers a set of commands to access this hardware with a given configuration. This kernel can be accessed from Jupyter with a notebook, thus offering large possibilities: the most classical one would be to write ipywidgets to get back the usual GUI with knobs and displays. But we can imagine more interesting solutions: instead of writing your experimental protocol on paper and entering the corresponding program in the interface, you can write code to let the computer establish the experimental sequence itself. Let's take a concrete example: we want to see how the plasma density evolves as a function of the operating parameters (power, magnetic field, pressure). We can define the series of tests by hand and the way each parameter will evolve. It is not straightforward, because the effect of the operating parameters depends on how you make them evolve during the test. So you have to check in the previous experiments how they correlate and establish which sequences are best (ramp the power first, then ramp the magnetic field, then inject the gas, for instance). Now, since you have the data, the controller and the computing power available in your notebook, you can try to automate the sequence: you train your neural network on the previous sets of data to highlight the patterns relevant to your objective and then you apply this pattern to the next discharges. If you get the results you want, good; otherwise, you use these new results to improve the controller. Yes, you are in a closed loop with the computer having access to both the inputs and the outputs, the ideal case for machine learning. And experimentalists thought that their job would never be threatened by machines!

Jupyter in real life – Part 2: design

July 5, 2016

I explained in the first part the reasons why I chose a Jupyter-based system; in a few words: maintenance, human/data interface, Python. I will now give some details on the design of the application. A prototype can be found on my GitHub, but be careful: this is still a proof of concept, yet a working one, that my teammates and I are using (and debugging), but still at an early stage without the polished completeness of a production-grade application. Therefore, my purpose here is not to "sell" a product that can be downloaded for immediate use but to explain the method and, maybe, encourage others to develop their own application.

The application, which is officially called Gilgamesh, is made of three components:

Gilgamesh Server

It is a personal version of JupyterHub, which basically lets you use Jupyter in the cloud: you connect to a login page with your web browser and you can start a personal instance of Jupyter with the dashboard as a front page. I say that this version is personal because I have rewritten the code almost from scratch, using only the main mechanism (reverse proxy/spawner) and leaving aside everything that makes JupyterHub battle-hardened. The reason was twofold: I needed to use JupyterHub on Windows (the standard version cannot, because of the way process IDs are managed by Windows) and, above all, I wanted to understand how it worked. I didn't recode all the safety systems because I didn't need them for the proof of concept: if a process idles, I can reboot the Hub; the number of users is limited (ten) and they won't be disturbed too much by a few seconds of waiting. Another reason why it is personal is that I have added some services to the Hub. Actually, you can easily add services to JupyterHub by using "hooks", which are a kind of access port for external code. But when I started, the mechanism was not clear to me and it was easier to add the services directly in the Tornado code. The main service I have added is a central repository, from and to which users can push and pull notebooks from their account. This is easily done because I store the notebooks not on the local filesystem but in a PostgreSQL database, using the PGContents extension from Quantopian (a configuration sketch is given below). The other service is the bibliography: there is a BibTeX file with all the useful articles, books and other documents, which can be displayed in an HTML page (with the BibtexParser module and a Jinja2 template) and which can be referenced in a notebook with a small JavaScript extension that I have added and that converts every \citep[xxxx2016] into a hyperlink to the corresponding document (à la LaTeX).
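
For those who want to try the same storage setup, here is a hedged configuration sketch; the class path follows the PGContents documentation as I remember it and the database URL is a placeholder, so check your own version of the package for the exact option names.

```python
# jupyter_notebook_config.py -- sketch of storing notebooks in PostgreSQL
# with PGContents instead of the local filesystem (option names assumed).
from pgcontents import PostgresContentsManager

c = get_config()
c.NotebookApp.contents_manager_class = PostgresContentsManager
# Placeholder connection string: adapt user, password, host and database name.
c.PostgresContentsManager.db_url = "postgresql://jupyter:secret@localhost/notebooks"
```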

Jupyter Dashboard Extension

Gilgamesh

This is the Python library that provides access to the data and to the physics models. This part depends deeply on the structure of our diagnostics, which makes it not easily exportable to other projects in its present configuration. Yet there are several patterns that can easily be generalized, and my present work is to separate this general logic from the implementation details of our diagnostics. The objective of the library is to give the user high-level access to the data, without having to think about how the data are hard-wired to the sensors, and to give him the power of data processing libraries like pandas, scikit-learn and friends. One difficulty with high-level access is providing a seamless interface to data that change permanently from experiment to experiment: diagnostics can be changed, recalibrated, disconnected, reconnected, new components can be added to the testbed, and so on. It is painful for the user to keep track of all the changes, especially if he is not on site. So the idea is that the library takes care of all the details: if the user wants the current signal from the Langmuir probe, he just has to type 'Langmuir_I' and he will get it: the library will have found, for the requested experiment, on which port it was connected and which calibration was applied to the raw signal. This is one step towards the high-level approach and it is related to the 'signal' approach: you call a signal by its name and then you plot it, check its quality, process it. Another, complementary approach is to make the signals aware of their environment; this is the 'machine' approach. The testbed and its components, especially the diagnostics, are modelled in Python by classes (in a tree-like hierarchy). A given diagnostic has its own class with its name, its properties (position, surface, ...), its collection of signals and its methods, which represent its internal physics model. Let's take the Langmuir probe again as an example: instead of calling the signal 'Langmuir_I' and the signal 'Langmuir_V' and processing them to extract the density, you just call the method Langmuir.density() and the object does all the hard work for you (a sketch of this idea follows). So the library lets the user choose between the 'signal' approach for basic processing of the data and the 'machine' approach to activate the heavy physics machinery and interpret these data at a higher level.
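
Here is a minimal, self-contained sketch of the 'machine' approach; the class names, the calibration and the density formula are illustrative placeholders, not the real Gilgamesh code.

```python
import numpy as np


class Diagnostic:
    """A diagnostic owns its metadata and its calibrated signals."""

    def __init__(self, name, position):
        self.name = name
        self.position = position          # position on the testbed [m] (assumed unit)
        self.signals = {}                 # signal name -> calibrated numpy array


class LangmuirProbe(Diagnostic):
    def __init__(self, name, position, area):
        super().__init__(name, position)
        self.area = area                  # probe collecting surface [m^2]

    def density(self):
        """Toy density estimate from the ion saturation current (placeholder physics)."""
        i_sat = np.abs(self.signals["Langmuir_I"]).max()
        e = 1.602e-19                     # elementary charge [C]
        c_s = 3.0e3                       # assumed ion sound speed [m/s]
        return i_sat / (e * c_s * self.area)


probe = LangmuirProbe("langmuir_1", position=0.15, area=1e-6)
probe.signals["Langmuir_I"] = np.random.normal(0.0, 1e-3, 1000)   # stand-in data

# 'Signal' approach: work directly with the calibrated trace.
trace = probe.signals["Langmuir_I"]

# 'Machine' approach: one method call hides the processing and the physics model.
print(probe.density())
```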

Gilgamesh Manager

This is the most classical part: a standalone, GUI-based application to manage the data. I added it as a safety net: I was not sure at the beginning how easy it would be to use the notebooks to manage the data. So I used Qt Designer to develop this graphical layer on top of the Gilgamesh library. I am not sure that I will keep this component in the future. The development of ipywidgets is fast and makes it possible to build advanced interactive tools directly in the notebook. If you combine that with the Dashboards extension, you practically get the equivalent of a native application in the browser. OK, I exaggerate a bit, because it is not yet as fast and the interactive manipulation of data (as with pyqtgraph, which I use in the Manager) is not as efficient, but these tools are progressing quickly and I can see a total replacement in the near future. Even now, I have a "Dashboard" notebook that displays the overview of the results of the last discharge on the big screen of the control room and it is, I must say, convincing.

Jupyter Dashboard Extension

That's it: the tour of the design choices for this Jupyter-based data processing system comes to an end. Next time, I will share some feedback on its development and operation. After that, we will have a look at some examples of each component.


Jupyter in real life – Part 1: specs

July 5, 2016

Jupyter is the reference in terms of notebooks. Its principle of narrative computing offers many advantages, but the most common application is in education (see for instance this list of notebooks, which are mainly tutorials). The ability to follow a calculation step by step, and to do it ourselves, is of course already a big help in understanding a subject. Yet I am convinced that the notebook, and the evolution it is presently following, can also play an active role in research and production. I want to share, in a series of posts, one particular application of notebooks with the concrete example of our testbed, in the hope that it can convince other people to use it or to share their own experience in this area of research.

In this first post, I will explain why I have chosen Jupyter over more classical methods for data sharing and processing.

The need for a data platform

I run a middle-sized experiment, worth several hundred thousand euros, which aims at producing a helicon plasma and analyzing its interaction with radio-frequency waves.

Ishtar Testbed

Despite its limited size, the experiment involves several teams distributed over several countries in Europe and plans to extend the cooperation to other continents. The idea is to have a shared experimental platform accessible to whoever wants to carry out measurements on this kind of plasma source, with a friendly plug-and-play interface for diagnostics and easy access to the data. In brief, this should be a 21st-century way to do "cloud experimenting" on a modest budget. In a less emphatic and more concrete tone, my need was the following: all data transit through Labview (I will explain, but not defend, this choice on another occasion; in short: time constraints). They come raw; I want to apply all the calibrations and metadata stamping and make them accessible on another, more flexible and cost-effective system. In addition, I would like the users to have access to the configuration of the testbed, so that they know what kind of hardware was present when the data were acquired.

Distribute data, but in a meaningful way

My main concern was to make the data available to the distributed team. My first idea was rather classical: develop a data server. Basically, the data are stored on a computer with, for instance, an HTTP server, and each user connects either through a web browser or through a dedicated client to display the list of experiments and the associated data and download them. Since I wanted to use Python anyway (because it is, in my opinion, the language best suited to this kind of Swiss-army-knife data and metadata manipulation), I was thinking of implementing a Tornado server like the HDF server. It could have been thought of as an extension of our present intranet, but the implementation would have been difficult since this intranet runs on Apache/PHP/Drupal (a fast and efficient solution but not the most appropriate in the long term, but that is another story), or as a standalone service. Another option could have been to use something like Tango, which is used on big experiments like Sardana, but since we already had our own control system, it would have been overkill. So the server version seemed the most suited to our requirements.

Not the obvious choice

Yet I was not convinced by this choice for several reasons:

  • Each experiment contains several GB of data and we can have up to 100 experiments per day. Not all data are relevant, but we are still at a stage where we don't know exactly how to clean them. This means that people will tend to download a huge amount of data just to process a small part of it; I did not have enough bandwidth to support useless data transfers and wanted a more economical way to deal with data,
  • If a dedicated client is used, it is faster than a web browser, but we would have to update it with each evolution of the database and make sure that each client computer has the right versions of Python and the different modules. In a collaboration where people come and leave often, it could become very time-consuming to check that every user is equipped with the proper tools; so I wanted a solution where I can centralize the maintenance,
  • The fact that the team is physically distributed means that everybody will work on the data in their own way, with their own tools and their own models. So, in addition to sharing the data, we would have to develop and install tools to share the numerical tools and the physics models and to improve communication. This is what is done in most collaborations, but it is probably not optimal and there is room for enhancement; I wanted to try new solutions here,
  • Finally, I am convinced that notebooks are the future of data processing and computing. They bring a huge improvement to the human/computer interface, with a nice, easy way to explain what you are doing in your calculations or how to use the data. It is particularly useful for a collaboration with temporary members (students, short-term participants): they can follow your steps and understand how to process the data with a very smooth learning curve. In addition, you shorten the path between the retrieval of the data, their processing and analysis, and publication. All in all, notebooks maximize the time dedicated to the creative part. I wanted to use this killer feature and see it working in real conditions.

This is why I decided to give the Jupyter-based solution a try. It opens many interesting perspectives, even though some hurdles still need to be overcome. This will be the subject of the next post, where I will detail the design choices of this solution, with more emphasis on the code.


The Pelican Experiment: the end

July 5, 2016

Sometimes I need to be pragmatic even if it means that I have to give up a project where I have invested a lot.

I have tried to move the blog to Pelican. I explained my main reasons here. After several weeks, I must admit that it was not a good idea. Not that Pelican is a bad product; on the contrary, it is nerdy fun to play with, but it is not suited to my needs and configuration. It is better for people with a single development computer who write regularly on their blog and make a showcase of it. My blog is more a kind of public area where I put my ideas, links and projects, a way to express my thoughts. I don't want to compile, commit and push each time I need to put some words online. In addition, I was bothered by the absence of interactivity.

So, basically, I am coming back to WordPress. I will move back the few articles I wrote there.


A glimpse in the future of scientific publishing

February 19, 2016

You have certainly heard about the discovery of gravitational waves. There is of course the press coverage offering the big picture to a wide audience. Maybe you were curious enough to dig a bit deeper into the details of the measurement. The natural way is to read the official scientific publication, in the form of an article in Physical Review Letters: usually four pages long (here a bit more, probably due to the importance of the discovery), rigorously written, based on numerous references, in a very concise but accurate style. In brief, this is the pinnacle of publication in physics.

But this time, the authors have also released their results in another form: a Jupyter notebook. You can find it here. Do you see the difference? Yes, it is interactive. You can just follow what the authors wrote, but you can also modify the code, run tests, torture the data. Compare both publications and see with which one you understand better. What is interesting is that you see not only a finished product but also a big part of the process used to reach it. It is a big step on the road to reproducibility and a great instrument for learning a topic (did you notice that there are even sounds included in the notebook?).

Of course, this is a first step: several aspects can be improved, but Jupyter already offers a vast potential to enrich the publication process. Just imagine one big possibility: Jupyter is based on a three-tier architecture: a kernel to do the calculations, a frontend in HTML/JavaScript (with a backend in JavaScript to ensure the communication with the kernel) and a server based on Tornado (with ZeroMQ for the communication with the kernel). In the present example, the LIGO team released only a notebook, which is self-contained: it needs only standard libraries and can run with the Python kernels available with the Jupyter distribution. The data are downloaded independently through a server. Now imagine that the team gives access, in addition to the notebook, to a kernel equipped with their specialized libraries and all the tools they use for refined analysis; a kernel which has direct access to the data, all the data, not only the good ones. The notebook, instead of connecting to the local kernel, connects to the remote kernel. In this case, if you have the required experience, you are able to work almost in the same conditions as the discovery team.

One step further: imagine that you can comment below each cell and compare your modifications with the ones brought by other people on their own versions of the notebook; you would increase and (if the noise control is good) improve the level of discussion on each paper. These extensions could be based on an extension of JupyterHub, which manages from one single point the access to notebooks for a group of users.

One last step into the future: imagine that the kernel, instead of being a coding language, is a machine: a Raspberry Pi, an Arduino, a digitizer, the controller of an experiment. Instead of programming code, you program the machine remotely from your notebook. It is a bit far-fetched for the moment, but imagine that you could enter your own observation program for LIGO directly from the notebook. The kernel takes care of queuing the requests from all the notebooks, the experimental team prioritizes these requests and you get notified when the results you asked for are available. Science from your bed!

This is the kind of thing that Jupyter is starting to make possible. And don't be surprised: the future comes very fast.


Scratching the surface of coding

February 9, 2016

I am playing with my children with Scratch, the famous intuitive graphical programming language from MIT, and I must say that I am impressed. First, impressed by the quality of the interface and the capabilities of the language. Second, impressed by how fast it helps children learn the programming patterns of visual, event-driven languages. I mean, when you are able to code in Scratch, you are basically able to code in Labview. This is why hardware manufacturers are starting to use similar types of language to set up their hardware (see for instance the RedPitaya).

What makes Scratch attractive, in addition to its intuitive approach to programming structures like events and loops, is its library of sprites, backgrounds and sounds, which saves you from creating them and lets you focus directly on their interaction.

OK, it is a language for learning, and it may be painful in the end to move icons around to create mathematical expressions. But, again, that does not disturb the thousands of Labview developers who do it on far more complex architectures. I am not a big fan of these graphical methods for professional development (try to check your code when you do not have a 20-inch screen with Labview installed), but it is great fun for helping children learn. So I recommend it warmly.

It is available on the Raspberry Pi with direct manipulation of the GPIO signals, which opens access to the real, physical world. And it is really fascinating to see how the children (and I) enjoy seeing the LED blink when the ball hits the wall in our improvised Pong. However, the interface is not yet polished: the signals are accessed through broadcast messages and have complicated names (try explaining to a non-English-speaking child what gpio10off means!). But I am pretty sure it is only a question of months before we get an interface naturally integrated into Scratch.

To conclude, if you want to become a Labview developer and do not know how to code yet, start with Scratch.


The long void

January 16, 2016

It has been a long time since my last post. I am trying to approach the keyboard again and to put one word after the other to start again some writing activities.

In the long void of this blog, I got busy studying some new research topics and expanding my toolbox. My purpose is to share some of these new areas of interest in future posts.

I give a short list of them right now, with some comments and useful links for those interested in having a look at them.

In physics:

  • Helicon plasmas: I am still working on this kind of plasma generator, which uses a helicon antenna and magnetic coils. They have the advantage of creating high-density, homogeneous plasmas in a compact volume. The problem is that the exact mechanism by which the electromagnetic wave ionizes the gas is not understood. The masters of this technique are Chen and Boswell. One well-known application is the generation of the plasma in the VASIMR engine.
  • Nuclear magnetic resonance in plasmas: this is a very raw idea which needs a lot of work before having even a proof of concept. NMR is classically used in fluids and solids but not in plasmas, where the density, several orders of magnitude lower, prevents the measurement of a clear signal. One idea would be to use a fully polarized gas (whereas there is always only a small percentage of polarized ions in the human body, for instance). More details on the existing ideas in the paper by.
  • Techniques to measure RF and DC electric fields in the sheath of a plasma in the presence of a magnetic field. This is the topic given to one of my PhD students. I will not go today into the intricacies of why we want to measure them, but the main method involves the Stark effect and Stark mixing. So back to the classics of quantum mechanics.

As for the toolbox, I have embraced two types of tools that I will have great difficulty abandoning now:

  • The Jupyter (formerly IPython) infrastructure: I say infrastructure and not notebook because it is more than that: the whole underlying machinery is really powerful and well thought out and has not yet shown the full measure of its capabilities. I mainly use a self-made version of Jupyter Hub (Windows-compatible, with its own file manager) to give my team access through the browser to the notebooks and the Python kernels. The big advantage is that access to the experimental database and all the associated processing libraries is done through the notebook. Display of the data is also done in the notebook through JavaScript extensions. So the end user does not have to install any software or manage library compatibility: everything is done in one location on the server side. In addition, I can change the structure of the database as often as I want; as long as I keep the same access interface, there is no change for the user. The other big advantage is the (relatively) smooth transition from data analysis to publishing: you can do almost everything in the same place and you keep the traceability of the data plotted in your articles. Since the whole team has access to the notebooks, they can clone them, comment on them and improve them.
  • Open hardware: in spite of all the articles on the topic and the success of the Raspberry Pi, Arduino and others, we still underestimate the potential of having cheap, fully documented hardware, especially in science. Because of the way science is funded, experiments increase in size and decrease in number at the expense of small and middle-sized experiments. Open hardware clearly makes it possible to achieve operations that were until now reserved to expensive hardware supported by expensive software. I have played with the RedPitaya, which is basically a Zynq 7000 SoC with radio-frequency ADCs and DACs, and what we can do with it is really incredible. Not only can you acquire or emit signals up to 150 MHz, but in addition you have a small FPGA to implement your real-time operations. I would advise you to have a look at the Git repositories from Pavel Demin to see all that can be done with it. If, in addition, you integrate this kind of board in your own internet of things, you greatly simplify your flow of data: the board can be controlled directly from Jupyter (well, notebook, dashboard, control board, you start to see why this gets interesting), and you access all your tools from one location reachable over the internet from everywhere (thanks to your tablet or your phone). Imagine being able to run your experiments from a beach on the other side of the planet (well, in practice it is not authorized by the safety authorities and morally it is not accepted by your boss). With the imminent arrival on the market of new technologies like the HoloLens, I think the way we do experimental physics is about to change a lot.

In addition to that, we had some adventures with the Raspberry Pi and Scratch which are worth sharing as well, but that is another story.
