
Model uncertainty for P-value estimation


I'm interested in calculating P-75, P-90, and P-99 values for PVsyst estimates.  I looked at the help page on this subject (https://www.pvsyst.com/help/p50_p90evaluations.htm) and found that the only sources of uncertainty discussed in detail relate to the input meteo data: interannual variability, error in satellite data, etc.  These are all things the user can look up or calculate and apply to the PVsyst output themselves, externally to the PVsyst run.  The part of the uncertainty we're missing is the part related to the PVsyst model itself.  Could you supply some information about that?

Clearly the full uncertainty depends on the quality of the inputs as well as on the uncertainty of the model assumptions and algorithms, but it should be possible to estimate the magnitude of each of these effects independently (and I would hope that the uncertainty of the algorithms is checked every time an update is made, or there's no basis for making the update).  For the uncertainty of the inputs, you would only need to propagate those uncertainties to the outputs, not estimate how inaccurate people's input values are in the first place.

Any information you could provide would be helpful.  At the moment, IEs are guessing based on "experience" and comparisons to real systems, where errors in measured irradiance, deviation of the as-built system from the engineering design, tracker performance, etc., come into play.  Thanks!


Hi!

Sorry for the delay; this is a difficult question. As you point out, there is a complex hierarchy of uncertainties, some of which are well defined, others less so.
Zooming out first, we tend to classify the relevant uncertainties according to the intended use of the simulation.

As one goes down the hierarchy, the order of magnitude of the uncertainties generally decreases. It is a reasonable approximation to assume that independent uncertainties combine as a root-mean-square (RMS) sum.
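As a minimal sketch of that RMS combination (assuming independent error sources; the percentage values are illustrative, not PVsyst outputs):

```python
import math

def rms_combine(uncertainties_pct):
    """Combine independent uncertainty sources (each in %) as a root-mean-square sum."""
    return math.sqrt(sum(u ** 2 for u in uncertainties_pct))

# Illustrative values only: weather variability (5%) dominates the combined total,
# while measurement/parametrization (1%) and model (0.1%) contributions barely register.
total = rms_combine([5.0, 1.0, 0.1])
print(f"combined uncertainty: {total:.2f} %")
```

Because the terms are squared before summing, a source an order of magnitude smaller than the dominant one is effectively negligible in the total.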

The first level is the case of yield estimates or PXX evaluations. Here the dominant factor is:

  • the year-to-year weather variability, O(5%)

The second level is the case of comparisons with actual data, when you use measurements or historical data as input:

  • Measurement uncertainty
  • Parametrization uncertainty

Here the order of magnitude can vary widely, both for the measurement uncertainty and for the parametrization (e.g., depending on the experience of the modeler). Certain studies, such as the PVPMC blind modeling comparisons, show that this uncertainty can be higher than expected! All in all, I would summarize these uncertainties as O(1%).

The third level is the case of tracking the differences between two system design choices, for example deciding between two cable cross-sections.

  • Intrinsic uncertainty of the models. This is the main factor at this level, but each model building block has a different uncertainty.
    We do not have an exhaustive answer for all models. However, we estimate the base models to have a very low (negligible) uncertainty, O(0.1%), on the overall results:
    • One-diode model
    • Transposition model
    • DC to AC conversion
    • Ohmic resistance models
  • Some model choices are more critical and can yield larger uncertainties. For example, the electrical shadings partition model, in the context of complex shadings, can have a 0.5% uncertainty because many approximations are made; in a context of regular rows, however, it has a very low uncertainty. Other examples are the central-tracker approximation for diffuse shadings and the sub-hourly clipping effect, which not only add uncertainty but also tend to bias the results.
  • Overall, we still need to publish a more exhaustive list (honestly, it is a bit of a daunting task).
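To make the budget concrete, here is a hypothetical tally combining the per-block figures above as an RMS sum (the individual values are illustrative placeholders based on the orders of magnitude discussed, not official PVsyst numbers):

```python
import math

# Hypothetical per-block model uncertainties in %, illustrative only.
model_budget = {
    "one-diode model": 0.1,
    "transposition model": 0.1,
    "DC to AC conversion": 0.1,
    "ohmic resistance models": 0.1,
    "electrical shadings (complex layout)": 0.5,  # critical choice, larger term
}

# RMS combination: the single 0.5% term dominates the four 0.1% terms.
total_model = math.sqrt(sum(u ** 2 for u in model_budget.values()))
print(f"combined model uncertainty: {total_model:.2f} %")
```

Even with one critical 0.5% term, the combined model uncertainty stays well below the O(1%) and O(5%) levels above it in the hierarchy.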


To finish with a comment more directly on your question: for the purpose of a PXX evaluation, I think you can essentially neglect the modeling uncertainty. Since the intrinsic model uncertainty sits two levels lower in the hierarchy than the weather variability, it is masked by the uncertainties above it. Exceptions are some critical modeling choices, such as:

  • the electrical shadings partition model in a context of complex shadings,
  • the representative-tracker approximation for diffuse shadings in a context with tracker patches,
  • the sub-hourly clipping effect,
  • the bifacial model,
  • any other situation that requires an important approximation not adapted to the case studied.
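Putting the pieces together for the original question: once a total relative uncertainty has been combined, PXX values are commonly derived by assuming the annual yield is normally distributed around the P50 estimate. A sketch under that assumption (the P50 value and 5.1% uncertainty are placeholders, not PVsyst results):

```python
from statistics import NormalDist

def p_value(p50_yield, sigma_pct, exceedance):
    """Yield exceeded with probability `exceedance` (e.g. 0.90 for P90),
    assuming a normal distribution centred on the P50 estimate."""
    z = NormalDist().inv_cdf(1.0 - exceedance)  # negative for exceedance > 0.5
    return p50_yield * (1.0 + z * sigma_pct / 100.0)

# Placeholder numbers: P50 = 1000 MWh/yr, combined uncertainty 5.1 %.
for p in (0.75, 0.90, 0.99):
    print(f"P{int(p * 100)}: {p_value(1000.0, 5.1, p):.0f} MWh/yr")
```

Because the weather term dominates the RMS total, adding or removing a 0.1% model term barely moves these PXX values, which is why the modeling uncertainty can usually be neglected here.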
