Everything posted by dtarin
-
Slow can be used on any size system, but simulations will extend to multiple hours for large, complex sites. The main difference I see between slow and fast (modeling projects in the US) is that fast will have higher near shading loss and lower electrical loss, while for slow it is the opposite. Not always, but more often than not. In total, shading losses are typically comparable for most sites, with fast being slightly higher. Sometimes the difference is negligible, but it depends on the site; less complex and utility-scale sites will have closer figures. More complex sites with topography, and smaller sites with severe shading, will benefit more from the slow simulation and its increased accuracy in calculating electrical loss. As for when to use each, it depends on your position, project, etc. For utility scale in development stages, fast makes sense, then moving to slow (or at least benchmarking against slow) later on when the design is closer to final, prior to financing, construction, etc. If the project is small and simulation time is short, I don't see a reason not to run slow for single, one-off estimates. But if you're doing batch runs, running multiple designs, evaluating different components or weather files, or are in an early stage and need indicative numbers, fast is fine. What's more important is understanding the differences and their impacts on simulation results (irradiance, shading, production, etc.), and deciding when to prioritize simulation speed over accuracy and vice versa (and knowing whether material differences even exist).
-
-
You've probably already checked, but are all objects selected to be shading objects in their properties menu?
-
Batch simulating with different tilt and azimuth will provide more accurate results.
-
This is not applicable to PVsyst. All trackers in the shade scene track at the same angle. Users can force trackers to track at different angles relative to the standard tracking algorithm, but it is still applied globally to the scene. One would need to model each area of interest individually with different forced backtracking angles and combine results. Whether this results in improved production is uncertain.
-
Yes, it is typical and good practice to use the MPPT feature to model mixed blocks. It doesn't necessarily need to be due to different bin classes; loading ratios are also relevant even with a single bin (clipping is non-linear). This is of course dependent on the inverter in question, the design, etc. With regard to representation, what counts as "significant" depends on project details and opinion. A mixed-bin design on a single MPPT will be more impactful due to increased mismatch losses, which are up to the user to define. Modeling this in PVsyst using the MPPT sharing feature and creating distinct sub-arrays allows one to specify mismatch individually, and documenting this in reporting is good practice. If all of these differences are averaged out, is there a difference in production? I am not sure, but it would probably take the same amount of time to model the sub-arrays individually as it would to determine those average values and model a single sub-array.
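To illustrate why clipping is non-linear and why averaging loading ratios across MPPTs can misstate production, here is a toy numerical sketch. The power values and limits are invented for illustration; this is not PVsyst's internal calculation:

```python
# Toy illustration: clipping is a min() operation, so it is non-linear.
# Clipping applied per MPPT is not the same as clipping applied to the
# combined/averaged array. All numbers below are hypothetical.

def clipped(dc_power, ac_limit):
    """AC output after clipping at the MPPT/inverter limit."""
    return min(dc_power, ac_limit)

# Two MPPT inputs with different loading ratios, each limited to 100 kW.
dc_a, dc_b = 120.0, 80.0
per_mppt = clipped(dc_a, 100.0) + clipped(dc_b, 100.0)  # 100 + 80 = 180 kW

# The same total DC treated as one averaged block with a 200 kW limit.
combined = clipped(dc_a + dc_b, 200.0)  # 200 kW, clipping disappears

print(per_mppt, combined)
```

The averaged model hides the clipping on the heavily loaded MPPT, which is one reason modeling distinct sub-arrays with MPPT sharing can change results.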
-
The best way would be to do everything outside of PVsyst and create manual reports. However, if you want or need to do it inside PVsyst, there are not many options. One way is to adjust the MV ohmic loss for that subarray. Set an MV ohmic loss for just that subarray to something like 155%. You will most likely need to adjust the wire section size to get the length of wire to a permissible length (there is a character maximum). This example was three symmetric blocks; I didn't fine-tune the percentage loss, it is approximately a 1/3 loss. Define the ohmic loss as typical for the other subarrays.
-
PVsyst will often (or always?) default to using the highest GCR under the backtracking menu if you have not selected which tracker(s) to use for backtracking, even if the difference is tiny, say 0.411 vs 0.410. In the shade scene, go into the backtracking menu under tools and select the appropriate tracker for backtracking. There are different options: you can select an individual table (at the pitch you want) or automatic.
-
Set the report to the second decimal place to get a clear picture; the losses could be near zero and getting rounded off.
-
If there are no trees or objects that shade the modules, backtracking is on, and your system is flat, it will not have electrical shading loss.
-
Increase in energy yield with azimuth different than zero
dtarin replied to Rafael Santos's topic in Simulations
No electrical shading loss is showing. If these are bifacial c-Si, you'll want to include electrical effect (modeled according to strings) and then make the comparison. -
Manually move the modules and objects over the image and line things up that way
-
Add an object in the PVC file like a tree line or houses
-
GlobInc = POA
-
Importing meteo file(s) with the CLI requires the user to first create a SIT file from PVsyst, which means importing meteo files cannot be automated for unique sites. If I have to go in and create a site, I might as well just create the meteo file there, which creates a SIT file at the same time for known formats. Is PVsyst planning to add the creation of SIT files within the CLI? Importing a csv file and defining an .MEF file to locate name, lat, and long (with each row being a unique site) seems straightforward. This could default to using synthetic meteo to complete the SIT file. Or include lat/long selection in the custom import of MET files where we define data fields (known-format weather files typically already have lat/long).
-
System Sub-Array and 3D Field Size
dtarin replied to Joe Hollingsworth's topic in Shadings and tracking
It is common practice (I think) to ignore this, and I would suggest ignoring it. I'd rather have the full area of the table represented when calculating shading loss than change the size of the table to match module quantity and use a reduced table area (it's this area that determines shading loss, after all, not the quantity of modules). But you can change table sizes quickly. Determine the lengths associated with each table size according to string size: click a table, change the table length according to the number of modules, and record the length in meters. Then go into the equipment list, filter for trackers, filter by length for each table type (3/2/1 string), select all of that type, and change the length. The top picture is before the adjustment, the bottom is after; the number of modules has decreased after one table swap. -
System Sub-Array and 3D Field Size
dtarin replied to Joe Hollingsworth's topic in Shadings and tracking
It should not matter much that there is a difference. The shading losses calculated in the shade scene are applied to the output determined by the system size as it's defined in the system menu. As you've noted, the motor gaps and joint gaps (depending on the manufacturer) are not considered which leads to the discrepancy. You will need to delete tables to get them to equal, but this may not result in a material difference. -
Are there any plans to include the clipping correction button under the known format method? Certain weather data sources provide sub-hourly data in time-series format in a single file, which is useful to import through the known format menu, since it will automatically generate a separate weather file for each year. Otherwise, we are required to separate the file into 22 files and import each one through the custom format. The other option would be to allow the custom import feature to import more than one year, automatically generating a new file for each year (and not have the menu pop up each time asking the user to check the file, which should come only at the end of the file).
-
Go into the backtracking management menu inside the shade scene and set the backtracking parameters, and into the diffuse tool in the shade scene and set the selections. There could also be general issues with moving to a new version.
-
https://www.pvsyst.com/help/index.html?iam_loss.htm
-
-
When you multiply a random variable by a constant, the mean gets multiplied by the same constant and the variance gets multiplied by that constant squared. Xi are iid random variables.
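As a quick numerical check of the scaling rules E[cX] = c·E[X] and Var(cX) = c²·Var(X), here is a small sketch; the sample values are arbitrary:

```python
import statistics

# Arbitrary sample standing in for realizations of a random variable X.
x = [1.0, 2.0, 3.0, 4.0]
c = 3.0
cx = [c * v for v in x]

mean_x = statistics.mean(x)        # 2.5
var_x = statistics.pvariance(x)    # 1.25 (population variance)

mean_cx = statistics.mean(cx)      # 7.5  = c * mean_x
var_cx = statistics.pvariance(cx)  # 11.25 = c**2 * var_x
```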
-
For any single year, that is the case. For multiple years, the uncertainty due to annual variability decreases, so this parameter needs to be separated out, used with the equation in my previous post, and then added to the other uncertainty quantities in full. My example was not quite correct in this regard.
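A hedged sketch of what that combination might look like, assuming independent uncertainty components added in quadrature and interannual variability that averages down as 1/sqrt(N). The percentage figures are invented for illustration:

```python
import math

def combined_uncertainty(sigma_interannual, sigma_fixed, n_years):
    """Combine an interannual-variability term, which averages down over
    n_years, with fixed uncertainties (model, measurement, etc.) that do
    not. Assumes independent components combined in quadrature."""
    return math.sqrt(sigma_interannual**2 / n_years + sigma_fixed**2)

# Hypothetical figures: 4% interannual variability, 5% other uncertainty.
one_year = combined_uncertainty(4.0, 5.0, 1)    # sqrt(41)   ~ 6.40 %
ten_year = combined_uncertainty(4.0, 5.0, 10)   # sqrt(26.6) ~ 5.16 %
```

Only the variability term shrinks with more years; the fixed terms carry through in full, which is the separation described above.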