Dec 09 2021

Predicting Solar Cell Efficiency

Solar cell, or photovoltaic, technology is now critically important for our civilization. Solar power is among the most cost-effective power sources we have, and the greenest in terms of carbon efficiency. It can also have a very small footprint depending on where we deploy it. Rooftop solar, for example, has essentially zero footprint in terms of land use. According to one calculation, there is enough rooftop space in the world to provide more than the total energy consumption of the world. Covering every roof is not practical, but it shows the potential.

Advances in solar technology are therefore incredibly valuable. The focus has primarily been on improved efficiency, which has roughly doubled in the last two decades, from around 10% to around 20% for commercial solar panels. The theoretical limit of efficiency for single-layer silicon is 33.16% (the Shockley–Queisser limit). However, we can use multiple layers and other tricks to raise this theoretical limit to 68.7%, and light concentration can boost it further to 86.8%. There is therefore a lot of headroom above the current efficiency of about 20%. If we could, for example, double solar cell efficiency at the same production cost, that would cut the cost of installing solar in half, or double the potential capacity of rooftop solar. This would pair well with an electric vehicle with at-home charging.

Solar cell research could also reduce the cost of construction, make panels more resilient and flexible, and replace current rare or toxic elements with more common and environmentally friendly elements. There is a lot of room for improvement in this technology, and speeding up research to make those improvements is therefore highly valuable. Researchers at MIT and Google Brain may have just provided the world a tool for doing just that.

Traditional research involves building a solar panel with a specific change and then testing its efficiency and other properties. This is essentially brute-force trial and error. It may be informed by educated guesses about which changes are more likely to produce desirable results, but these are guesses at best. Now, of course, we have computer simulations that can help with some of that trial and error. It is quicker and cheaper to test a simulated panel in a computer than to build a physical panel. This has dramatically accelerated research, but it can still be relatively slow. First, the simulations take time to run, from days to weeks or months depending on their complexity. Complex simulations may require supercomputers that are in high demand, or need to be simplified or run for longer times on desktop computers.

But perhaps the greatest limitation of the simulation approach is that the theoretical space of possible variations on solar cell design is huge. For a multi-layered solar cell, researchers can vary the thickness of each layer, the gap between them, and a host of physical properties of each layer, including doping with specific elements. Multiply these all together and you get a massive number of possible configurations to simulate. But what if we could predict which alterations were more likely to lead to improved solar cell efficiency? That is what the new research from MIT and Google does.
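To get a feel for just how big that design space is, here is a back-of-the-envelope sketch. The layer count, parameter count, and grid resolution below are invented for illustration, not taken from the paper, but the combinatorial math is the point:

```python
# Hypothetical numbers: a 4-layer cell, 3 tunable parameters per layer
# (say thickness, band gap, doping level), each swept over a coarse
# 20-point grid. The total number of distinct designs is exponential
# in the number of parameters.
layers = 4
params_per_layer = 3
steps = 20  # grid points per parameter

configurations = steps ** (layers * params_per_layer)
print(f"{configurations:.2e}")  # prints 4.10e+15
```

Even at one simulation per second, exhaustively sweeping those four quadrillion configurations would take over a hundred million years, which is why anything that prunes the search space matters so much.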

They developed software based on the concept of differentiable physics. I could only find highly technical descriptions of what this is, so I will do my best to understand and translate it (and any experts out there feel free to chime in). The software uses a feedback loop of information that not only determines the effect of making a specific change in a complex system, but then feeds that information back to build a predictive model of how specific changes affect the system. The result is an increasing ability to predict the effect of making specific changes. Therefore, using the software they developed specifically for research in solar panel design, engineers can determine which changes are worth running in the simulator, which in turn determines which cells to build and test physically. This could be a massive time-saver, speeding up solar cell research and getting us closer to those theoretically optimal solar panels much quicker. In their paper they actually tested the system on perovskite solar cells (not silicon), but the principle is the same. Perovskite has potentially greater efficiency than silicon, but researchers are still trying to tackle the problem of stability: perovskite cells tend to break down over time and can leak lead into the environment.
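A toy sketch may make the "differentiable" part concrete. This is not the authors' code, and the efficiency function below is invented, but it shows the core trick: if the simulator is written so that derivatives flow through it automatically, every single simulation run also tells you which direction to change each parameter, so you can climb toward better designs instead of blindly sweeping them:

```python
class Dual:
    """Forward-mode automatic differentiation: carry (value, derivative)
    through every arithmetic operation, so the 'simulation' computes its
    own sensitivity to the input parameter as a side effect."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):          # float - Dual
        return Dual(o - self.val, -self.der)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def efficiency(t):
    # Hypothetical toy "simulator": efficiency peaks at 20% when the
    # layer thickness t hits 1.5 (arbitrary units).
    d = t - 1.5
    return 0.20 - 0.05 * d * d

t = 0.5  # start from a deliberately bad thickness
for _ in range(100):
    out = efficiency(Dual(t, 1.0))  # seed dt/dt = 1
    t += 2.0 * out.der              # step uphill along the gradient
print(round(t, 3))  # prints 1.5, the optimal thickness
```

Real differentiable simulators (the MIT/Google one included, as I understand it) do this through full device physics with many parameters at once, but the payoff is the same: gradients point the search, so far fewer simulations and far fewer physical prototypes are needed.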

The researchers have made their software available to the world as open source code. This means that solar cell engineers can start using this tool for free right now. It also means that programmers around the world can start tinkering with the software to make it better. They recommend using neural network systems to integrate their software with "optimization algorithms" to achieve "data-efficient optimization and parameter discovery."

As flawed as humans are, we can be damn clever monkeys. As I mentioned in another recent post, using computers to accelerate science and technology research is one of the promises of futurism that has absolutely been kept. We may not have flying cars and jetpacks, but we have computer-driven research that is orders of magnitude faster than traditional research. Given all the potential downstream benefits, I think this futuristic technology is even better than flying cars.
