An Introduction to the DuraMAT DataHub Webinar (Text Version)

An Introduction to the DuraMAT DataHub was delivered April 8, 2019, as part of the DuraMAT monthly webinar series.

In the webinar, Robert White of NREL discusses the DuraMAT DataHub, a central collection point for all the data generated by the DuraMAT consortium. The webinar includes a brief overview of the DataHub, followed by a walkthrough.

Teresa Barnes: Good afternoon and thank you all for coming to our third DuraMAT webinar. My name is Teresa Barnes, and I'm the director of the DuraMAT Consortium. Our goal with these webinars is to provide some basic information about DuraMAT work and other topics that inform and guide how we do DuraMAT work. One of the things that is very central to DuraMAT's mission is a central data hub or a central data resource for heterogeneous PV data.

So, today, we're going to hear about the DuraMAT DataHub. Historically, PV data has been very difficult to share and exchange between projects because it's highly heterogeneous. A lot of it can be proprietary. Sometimes it's incomplete. Often, time-series data requires extensive filtering and quality control to be usable. Materials data may come from a variety of different instruments and locations. Today, we're going to learn about the DuraMAT DataHub, including how to store and access data in there. We're also learning how to access it programmatically through an API.

Our speaker today is our data infrastructure lead, Robert White, who's a data scientist at NREL; he also works with several of the other Energy Materials Network consortia on their DataHub development. We'll discuss how the EMN data platform works, how it relates to DuraMAT, and how it influences what we do here. He's going to walk us through actually accessing data on the DataHub and show us how to work with a couple of different kinds of data. As you know from previous webinars, the way we do this is you submit questions through the questions tab on your webinar toolbar. We'll ask you to hold those, and we'll do the Q&A at the end. Everyone is muted, and we will unmute you at the end if you submitted a question, because it's hard to have people unmuted during the talk; it gets pretty loud. With no further ado, we'll let Robert White take over and teach us about the DuraMAT DataHub.

Robert White: So, I'm Robert White. I'm the data infrastructure lead here for the DuraMAT project and a data scientist here at NREL.

We have several new people within the project, and we also have some old hands, so I want to go back over a little bit of an overview of exactly what the EMNs are about and what the DataHub is about. Then we'll go through a walk-through of the actual website, and I'll show you how to upload and download data on the site. Then I'll also show you how to use software to access data within the DataHub. Starting off, we'll look at the Energy Materials Networks. Back in 2015, the DOE came up with this concept of creating a set of virtual laboratories, each addressing some particular energy materials question. Whenever you're going to distribute all of your resources across the U.S., on a variety of DOE installations and universities and companies, you have to provide a platform that they can work from to both share and archive the data.

We spent our first year, between all of the data teams on the various EMNs, trying to figure out exactly how we were going to do that. We began to analyze the various platforms that were out there that we could leverage. The idea was to reduce our resource cost by sharing all the development across the various EMNs. We ended up landing on the CKAN data-sharing platform as the one we wanted to work with. It could get us most of the way there, but we needed the capability to customize some of the interface.

If we were going to customize the interface, we wanted to make sure it could meet the various needs of the different EMNs, including things like their different workflows, goals, and instrumentation. CKAN provided us with both the generality we needed for a template to use in all the EMNs and the capability to customize it to meet our needs. So, what is the DataHub? It actually provides a platform for a variety of different types of work that you'd like to do.

Included in those is round-robin experimentation. It also gives you a place to store the data from a publication and then provide a link in the publication to the data. It's a secure archive that you can put just about any sort of data file into. It has a particular affinity for CSV files; we'll look at that a little later when we start diving into both the walk-through and the API.

It has the ability to capture metadata. That's a customization we helped build, because oftentimes metadata is missing, and that's the kind of stuff that provides context to your data and actually turns it into information. It does have some quick visualization capabilities, and there have been other plug-ins built for some of the other EMNs to visualize different types of data. It also provides programmatic access to the data as part of the CKAN API. What it is not: it is not version control.

So, you can't work with multiple versions up there and expect them to be tracked. There is a plug-in that some people are working on, but it is not ready for prime time. It is also not really suited to storing large time-series data. Anything larger than two gigabytes, or dynamic data that's being appended every day: this is not the platform for that. We're actually building a separate platform, offline from the DataHub, to house and handle time-series data far more efficiently.

If it ends up that you want to work with time series, or would like us to store time-series data, you need to get in touch with me, and we can talk about how to import your data and access it. Now I want to talk a little bit about the data management structure inside the DataHub, because once I start the walk-through, this will help you understand exactly how I'm moving through it. In the DuraMAT project, we've set the upper-level projects to be a sort of association, grouping the individual studies underneath a common umbrella. The sub-projects, or studies, in this case under PV Field Data, are a degradation study, something about Enphase microinverters, and an NREL soiling map. Those are all relatively connected to each other. The next level is the datasets stored inside each one of these studies or sub-projects. Each sub-project can hold any number of datasets.

You can consider a dataset as a folder, the same as on your computer, able to hold any number of files within it. In this example, if you're looking at Denver, you can see there's a variety of different types of files. In fact, the DataHub can hold just about anything: CSV files, TXT files, and .zip files can all be held inside of it. One thing to point out is that we start applying security to the data at the dataset level, so everything down to the dataset level can be seen by anyone who gets onto the website.
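Sketched out, using the examples from this talk, the hierarchy looks roughly like this:

    DuraMAT DataHub
      Project (umbrella association, e.g., PV Field Data)
        Sub-project / study (e.g., Degradation Study, Enphase Microinverters, NREL Soiling Map)
          Dataset (acts like a folder; security is set at this level)
            Files / resources (CSV, TXT, .zip, ...)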

Everything above the dataset level, things like abstracts and information about the project itself, is visible. However, you can't see anything below that if the dataset has been set to private. There are only two settings for the security: private and public. Later on, if some data has been deemed publicly available, we can change that setting and allow the public data to be displayed. At that point, we're going to switch over to the walk-through of the actual site.

So, if you go to DataHub.DuraMAT.org, this is the site you're going to land on. I'm not going to log in right now, because I want to give you an idea of what it looks like if you're a member of the general public and you just come to the website, so you can see what's blocked and what's available to be seen. I also want to mention that if you happen to be a member of DuraMAT and you've logged into this before and have your credentials, but you haven't been put into any projects, your experience is going to be almost identical to a member of the public. There's going to be very little else you're able to see in there. The first thing I want to show you, over here on the side (it's kind of buried under the edge up there), is the "Help" link right there.

If you click on that, it'll take you to a small webpage that has links to both the user's manual and a set of data standards guidelines for the DuraMAT project. You can open those in a PDF viewer directly inside of this, or you can download them. I also want to mention they can be found inside a project in this DataHub called DuraMAT Help. If we go back to the homepage, I want to show that these four buttons down the middle are your easiest access points.

If you have not registered for the DataHub, you can click the "Register" button. That will take you to fill out some information, at which point you'll probably get an email from either Courtney, who's working with us as both project manager and system administrator for the EMNs, or from myself, to get you set up in some of the projects. "Discover" opens up the various datasets to let you get into them and look at them. "Submit Data" sets you up to upload data, but that only works if you've logged into the system and you have a project to upload data to.

Then, the, "Data Tools" that we'll look at in a few minutes. That gives you access to the various data tools for analysis that are being built to support the DuraMAT project. Across the top, you've got a mini bar that goes from "Home", "Projects", "Data", "About", "Help." I'm going to start by just going into the "Projects" to show you what that looks like. On the "Projects" page, on the left-hand side, is actually tree of the various levels of the different projects.

As I mentioned before, we're using the upper level of the projects as a means to aggregate similar studies underneath each one. So, you don't see much about the projects here; the center band with all the graphics on it is almost identical to the band on the left-hand side. But if you look at one of the sub-projects within, let's say, the coatings here, what pops up is the name of the actual study that was funded, who the recipient was, and the PI. You'll also see who's supporting it, what the current status is, and the abstract behind it.

If you notice down at the bottom, there's a part that says "Datasets," and it says "0 Datasets." That does not necessarily mean there are actually zero datasets in there. It just means that you haven't logged in and you're not part of the organization. If you have logged in but you're not part of that project, you don't get to see that there are any datasets in there. When we log in, I'll show you that that changes drastically.

Going back to the projects, I want to dive into one down at the bottom that does have public data in it, even though I'm not logged in just yet: the NREL Soiling Map. This is an associated study that wasn't directly tied to DuraMAT, but a lot of its data is important to what we're doing inside the DuraMAT project. It was done as a separate study supporting development of a soiling map being produced by NREL. If you notice, it says it has three datasets, which indicates those are probably public. And if I go in there, there are three datasets in here.

All three of them are public resources. Occasionally, you might see three public datasets like this, but there might also be private ones hidden that you can't see. One thing I do want to point out, across the top here, is something called breadcrumbs. It leaves a trail of links leading back to the home page, if you need to follow your way through the system.

Along the left-hand side, you've got a series of "Tags," "Institutions," and "Data Source Types." A lot of these are filled out automatically when data is uploaded or a dataset is created, and they provide the information that allows you to filter whatever is in the main window in the middle. If I go into the soiling data, there are all the files sitting inside this resource. There's a whole series of CSV files; I'm going to open up one.

I do want to point out, though, there are a couple of numbers up here at the top. One says "Project ID," and it's a big, long string of characters. The other one is the dataset ID. Those are going to be important when we start talking about API access into the DataHub; this is where you go to retrieve the address of the item you'd be looking for. Getting back to the actual file: if I open up this data CSV, it's going to spin a little bit and then display what looks an awful lot like a CSV table. That's exactly what it is. The nice thing about CSVs is they're actually digested by the DataHub into its database, which makes the information searchable and downloadable directly, both programmatically and from inside the DataHub. In fact, if you'd like to display stuff, you can actually graph it, at which point it's going to ask you what type of graph you'd like to make.

We're going to do, "Choose Dates." Let's look at rainfall. It's going to show you what the rainfall periods are from January to April for this. But we're not looking at all of the records because, up here, it says there's actually 368 records and we're only looking at 100. So, let me switch over to actually the soiling, which is this plot. Then I'm going to increase to all of the records.

Nope, not that number. Let's try this number. There, it gives us the whole picture. Now, this is not what I would call useful for analysis, but it is useful for a quick sanity check of your data, or for finding a dataset you're interested in and would like to make use of.

Also, up at the very top is a set of three buttons. The first button here is "Email a Maintainer": if you click that, it gives you an email prompt so you can contact the owner of this dataset if you have questions about it. The next button is to download; what that will download is the CSV file we were looking at. Then this last one will be important later on: the "Data API" button. It provides a series of things that you can do programmatically inside of, or to, the dataset.

Now, at the bottom, it provides a series of examples of how to access this data programmatically. Down at the bottom, where it has the examples for Python, we're going to be looking at that in detail a little later on. It doesn't seem to want to open, though. Everybody's probably doing it out there.
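Even though the page is slow to open here, the Python example behind that button is typically a small datastore query along these lines. This is a sketch assuming CKAN's standard datastore_search endpoint; the resource ID is a placeholder you'd copy from the resource page:

    import json
    import urllib.request

    # Placeholder: copy the real resource ID from the Data API button.
    url = ('https://datahub.duramat.org/api/3/action/datastore_search'
           '?resource_id=<RESOURCE-ID>&limit=5')

    # Public resources need no credentials; private ones need your API token.
    with urllib.request.urlopen(url) as response:
        result = json.loads(response.read())

    # Each record is one row of the digested CSV.
    for record in result['result']['records']:
        print(record)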

So, I want to back up to "Projects" real quick and take a look at another way the data can be analyzed, or looked at anyway. If we look at the Enphase Microinverter Study, it also has a series of five datasets that are public. This is actually the example from the slides we saw earlier. If you click the Denver one here and open it up, inside are a CSV and a series of zipped time-series CSVs, along with a TXT file about the project itself.

If you click "View" on this, you also get to bring up another CSV. But this one's a little different – as soon as it opens. It has two key columns in it, latitude and longitude. So, as part of the CSV system, I can click on that and actually bring up a map of the actual locations. In the DuraMAT, we deal a lot with locations where installations are set at. This can be particularly useful for us. Clicking on any one of these tags actually will bring up the whole record that is inside that CSV. Likewise, it also has the Download and the Data API. We're going to try to access the data programmatically here in a little bit. I'm going to walk the breadcrumbs back up here to Home, real quick. Then we're going to look at what happens when you – or first look at the "Data Tools".

Take that back; this may be the only point I can get to it from. Inside the "Data Tools" are the various analysis tools we're building up for the DuraMAT project. There are four of them in here; three of them were built as part of the DuraMAT project.

The other one, RdTools at the bottom, is an associated tool that's incredibly important to the work we're doing. We'll keep adding to these tools as time goes on, as we find more and more useful tools to support the work. There is an "Email Maintainer" button to let you contact the developer of each tool. If you click on the links, it will take you to the GitHub repository where the tool is currently stored. We do have some tools under development right now that have their own websites; we will link to those as well, along with their code. So, now I'm going to jump back to the "Data" tab across the top.

The "Data" tab kind of strips away the project from it and just shows you the sets of – or the datasets that you can actually get to. Notice that all of these are public inside of here. That's because I'm not logged in. I'm about to go through the login process. This is going to change up for us.

When you click the login, it takes you to an outside site where we do your login. I believe we're eventually setting it up so that once you've logged into one of these EMNs, it will get you into all of the EMNs. Some people work on multiple EMNs, and this saves them having to log into each individual one.

Now that I've logged in, there's been a little bit of a change. Private datasets are starting to show up down here. Also, the tags have increased. Some have popped up, like SNL: not for Saturday Night Live, but for Sandia National Laboratories.

Saturday Night Live would be funnier than they are. So, now I want to show you, first of all, your user ID, because for any sort of programmatic access to this, unless it's a public dataset, you're going to need some sort of login credentials to talk to it programmatically. If you're inside the website, you won't need this information.

What you need to do is click your name up here at the top. It's very slowly going to pull in my credentials; we have a lot of people logged in, I guess. Down here, toward the bottom left, is that same kind of long string of characters. This string is what you would need for yourself to access the data programmatically. If I go back into the data, there's also a button that has now shown up at the very top of that page. If you're not logged in, this button does not appear.

But once you're logged in, you can start adding datasets, though you will need a project to add the dataset to. So, we're going to go ahead and add some data, and this is what happens: it pops up with a little display like this, "Add a dataset." So, we'll add a new dataset and give it a description.

Then it's going to ask you for tags. Now, there are no set rules for what you put in there for tags, but think of the things that would make it easy to search for this kind of information. I'm going to put "Test" in there and then "NREL." Those will be things you can filter a search on from the actual data site. Right down here, you'll notice the author; it automatically sets to whoever you're logged in as, along with your email address.

There's a DOI tag here; we can tag these datasets so they can be cited later. I'll tag in here for NREL. Then, for the data source type, let's see, we're just going to put in documentation or pictures for right now, just to make it easy. Depending on what you select there, you may be prompted for some additional information. The next step is to actually add your first set of data to this.

Now, you don't have to add it here; you could bump out from this point and add the data from a different location. But we're going to go from here for now. So, I'm going to upload some data: I'm going to pick a CSV file I've got that's a JV curve. There is a way to select the format you'd like this viewed as, but it's going to automatically detect the extension on the file at the start anyway, so you don't really have to tell it whether it's a CSV or JSON file or a JPEG or any sort of image.

It'll figure that out at the start. Data quality, we're going to say it's pretty good; I hope it is. JV/IV curve, and at this point, there's a whole new set of fields down here at the bottom. Depending on what you've chosen up there for the data type, these fields will change. The idea is to gather as much metadata as possible that's important for understanding the type of file you're uploading to the system.

So, go through this: NREL, collection date, we'll say yesterday. Type, it's a light curve. Then I'm just going to throw some numbers in here; I'll delete this later on. Once you're done with that, you can finish. You can also save at this point and keep adding additional data files, if you'd like, with the "Save and add another" button. But we're just going to wrap it up there.

Now, if we pop back up, we see that the dataset is automatically set to private, because all new datasets being created are set automatically to private. We need you to think really carefully about the data you want to release to the public, and not just automatically send stuff out everywhere. The resource is now in there; it's got the little CSV symbol next to it. If we view it, hopefully we'll jump to that page in just a second as it continues to think about it. It's good that everyone's getting online; we're actually exercising the system to see how much it can take right now. So, as it continues to spin its wheels... there it is. It pops up. It is read in now; the DataHub has digested the whole CSV into its database.

Now, if I wanted to see that JV curve, there it is. Not something you'd probably want to publish, but it at least gives you an idea of what it looks like. All of that additional metadata I added is down here at the bottom. So, not only is this metadata queryable, every element within this curve is actually queryable, too. We'll go over that later on.

Since I am the owner of the data, there is now a "Manage" button up here that allows me to adjust some of that metadata if I happened to have entered something wrong. Going back up to the dataset level, I'll point out over here on the side that this allows you to look at the metadata for the whole dataset. Let's say I decided, "Oh, this is actually public data. It's okay if anybody sees this information; let's get it out there." If you edit the metadata, there's a field here that wasn't there when you were first creating the dataset.

It's the "Visibility" and that's where we set the security. So, if I'm going to set this "Public", I can then update it. Now it's a public site. So, anybody that logs onto the network now, whether they're in or not, can now see this dataset. If you wanted to add an additional resource to this, new file and everything like that, you can click the "Edit Dataset," or, "Edit Resources" down at the bottom.

There's a button up here now called "Add New Resource." There's another button called "Reorder Resources," which just allows you to move things around on the screen, perhaps to move the important ones to the top. "Add New Resource" works exactly like the button we used before, and the process is exactly what we did a few minutes ago. Moving back up to "Projects" here, we should be able to see the new dataset that just popped in there.

Maybe. Hello? Okay, yes. So, under "Test 1," which is where we were putting it, we should be able to see the new dataset. I want to mention, also, that if you're part of the DuraMAT team and you've got your login credentials, we're going to go ahead and give you automatic membership in the test project as soon as we can. That allows you to at least upload or download or do whatever you want, dealing with a real project, in case you just want to test what you're doing and see how well it works. We've got several of you in there now, but I don't know that we've got everybody into the project just yet. Under the datasets... yes, we are exercising this system, I can tell. Here's the sample file upload.

Yeah, there's the one we actually worked on earlier. So, now I'm going to go back Home; that finishes up that part of it. Now, the thing I get a lot of questions on is: how do you actually access this thing if you don't want to use the GUI and go through the web application? For that, I have a demonstration IPython notebook. It is up there on my public GitHub page.

But to get to it, if you're in the DataHub, go to the projects and look for "DuraMAT Help" down there. Click on "DuraMAT Help," and there's a help and tutorial file in there with two resources. One of them is a link to where this page is stored. So, consider that a task that you could do. But we'll walk through this.

[Talks to someone in the background]

The slides are not changing all the way; they're just slow. Okay, all right. So, a shout-out to Nick Wunder for actually creating the first one of these little walk-throughs, which he did for another EMN project. I glommed onto his and adjusted it for what we're doing inside DuraMAT, and based on some of the questions I was getting from people at the various conferences we were at.

The top part of this contains a lot of information that can be gleaned from the documentation, both for the CKAN platform and for the various modules that we're using to access the data. So, there are links all through here to get you up to speed on that. I'm going to go ahead and run the imports for this part. This little section goes into the layout of the data structure within the DataHub.

This matches what we were talking about earlier when I was showing the slide of the data management structure. The use cases I've got in this file are: how to get the existing set of projects; how to get your specific set of projects; how to pull out all the project details; and, finally, how to query a particular data record out of a CSV resource.

The first thing to understand about using the API is that it's built on a REST API format, which means each query you do looks like a great big, long, strange URL string. Each request will consist of, first, the DataHub URL, followed by this structure, /api/3/action/. That is repeated for every one of them. Then the last part is the action you want to do.

As pointed out here in this action statement, the actions are kind of a noun/verb combination, like "project list" or "datastore search." Those are all listed in the API documentation, by the way. The next thing I'm going to do is build up the initial URL that we're going to work with (excuse me for zipping around here). The first line of this is just a variable to hold the EMN DataHub's name.

You can actually put any of the DataHubs here, but we're DuraMAT-central right now. The next line creates a string using a lambda function: it takes that /api/3/action/ in the middle, takes whatever you want to do as your action, and mixes them all together. Running that shows that we get this right here, this link.

You can actually click this. When we click it, this is what it brings up; I don't know if you guys have caught up to that yet. But this is the response back from the system: it literally has the whole project list in it. So, what we're doing is basically this, but we're going to bring it back programmatically into the system so we can look at it differently.
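In the notebook, those two lines look something like this sketch. The project_list action name follows the talk's project terminology; on stock CKAN the equivalent list action is organization_list:

    # Variable holding the DataHub we want to talk to.
    emn_datahub = 'https://datahub.duramat.org'

    # Lambda that splices the action name into the REST URL structure.
    action = lambda action_name: '{}/api/3/action/{}'.format(emn_datahub, action_name)

    print(action('project_list'))
    # -> https://datahub.duramat.org/api/3/action/project_list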

The next thing to think about is that API token I was talking about earlier. Everybody has their own credentials to get in there, and they're based on your API token. I've got mine up here on the screen right now. If you download this notebook from the GitHub site, the token will be blank, so you'll have to fill it in with your own. All you have to do is log into the DataHub, click your user name at the top, find your API token down in the left column, and then plug it in here. Then, any time you send a request into the CKAN system, you get two possible results.

One of those is going to come back and say your success is true, and it's going to have a big block of data returned inside the results. The other thing you might get is an error. It's going to be a much shorter statement, but it comes back with your success being false, along with the error in it. There are also a couple of bits of helper information that get passed through, too.
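Roughly, those two response shapes look like this (a trimmed sketch of the JSON that comes back):

    {"help": "...", "success": true,  "result": [ ...big block of data... ]}
    {"help": "...", "success": false, "error": {"message": "Not found", "__type": "Not Found Error"}}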

So, I'm looking now at pulling out the project list, what we just saw on the webpage a moment ago; we're going to pull it down this way. A lot of these steps will be repeated over and over again as we go farther down. The first thing is to pass the URL we want into the urllib module, and it will generate a request for us. The next thing, which I do out of habit even though this is a public data site we're about to pull on, is to go ahead and pass in your authorization every time. All you have to do is call add_header, adding the authorization and the API token you're using. The next step is to initiate the request itself, with this urlopen statement. The response you get back is that big block of data we just saw a moment ago. We parse that down with the JSON module and then look for only the results coming out of it. At that point, it goes out and comes back. This is what we get in response.
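Gathered together, the request steps just described might look roughly like this in Python. This is a sketch: the token is a placeholder you'd copy from your user page, and project_list is the action name as described in the talk:

    import json
    import urllib.request

    emn_datahub = 'https://datahub.duramat.org'
    action = lambda a: '{}/api/3/action/{}'.format(emn_datahub, a)
    api_token = '<YOUR-API-TOKEN>'  # placeholder; copy yours from your user page

    # Build the request and attach the authorization header.
    request = urllib.request.Request(action('project_list'))
    request.add_header('Authorization', api_token)

    # Initiate the request and keep only the 'result' block of the JSON response.
    with urllib.request.urlopen(request) as response:
        project_list = json.loads(response.read())['result']

    print(project_list)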

So, this is very similar to what was up on the page. It may not look as pretty as it did earlier, but this has all of the projects in the DuraMAT DataHub inside of it. See, there's the name for one of them right there; scroll down farther, there's the name of the next one, and so forth. Since this is all public information, everything comes down easily when you query; you didn't really need the credentials. The next case is to look for your own project list.

One of the best ways to do this is to look for the various projects where you have permission to do something that only a member would have the capability to do, and that's to create a dataset. Only members can create datasets within a project. So, to do that, you're going to create a little Python dictionary named params and pass in one key, "permission," with the value of just the string "create_dataset." Then it gets passed into this little function call, urlencode, that turns it into a string of the proper format you'll need for a REST interface.

Then assemble the whole thing together using the action statement: we're looking for a project list for a user, a little separator of a question mark, and then that parameter string tacked on. Now, these next steps, from seven on down, are identical to what we did a few moments ago. At this point, you get a much shorter list of just the projects where I'm allowed to create datasets. There are about 6 of them there, instead of the 40 or so that are in the DataHub.
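That sequence, sketched in Python (project_list_for_user follows the talk's naming; stock CKAN calls the equivalent action organization_list_for_user):

    import json
    import urllib.parse
    import urllib.request

    emn_datahub = 'https://datahub.duramat.org'
    action = lambda a: '{}/api/3/action/{}'.format(emn_datahub, a)
    api_token = '<YOUR-API-TOKEN>'  # placeholder

    # Ask only for projects where we hold create_dataset permission.
    params = {'permission': 'create_dataset'}
    param_string = urllib.parse.urlencode(params)

    request = urllib.request.Request(action('project_list_for_user') + '?' + param_string)
    request.add_header('Authorization', api_token)

    with urllib.request.urlopen(request) as response:
        my_projects = json.loads(response.read())['result']

    print(my_projects)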

Next: if you want to see what's inside a particular project, one that you are a member of (because this is going to take you past just the information about the project itself, down to the actual datasets), at this point you do have to have permission to see it. Here I am looking at a public dataset, so just in case your credentials won't get you into anything, this will still work. You need the Project ID, and that was something you could find at the top of any of the project pages; there was a little string of characters.

This is for the Enphase Microinverter Study; I've got its Project ID in there. Then, same thing: you create a little Python dictionary of the parameters you want, passing in the ID with the Project ID. You want it to include the datasets, so that's true, and you don't care about who the other users are, so that's passed in as false. The next steps are all identical: creating the parameter string, initiating the URL and the request.

At this point, you get all the information about the Enphase Microinverter project, and it comes out in this big, long, blocky thing. But you can query out of this the various datasets inside of it that you might want to look at. You can also take the query farther and look at what the files are in each one of the datasets.
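A sketch of that query (the project_show action name mirrors the talk's project terminology; stock CKAN's equivalent is organization_show, and the ID is a placeholder):

    import json
    import urllib.parse
    import urllib.request

    emn_datahub = 'https://datahub.duramat.org'
    action = lambda a: '{}/api/3/action/{}'.format(emn_datahub, a)
    api_token = '<YOUR-API-TOKEN>'  # placeholder

    # The project ID is the long string at the top of the project's page.
    params = {'id': '<PROJECT-ID>',       # placeholder
              'include_datasets': True,   # we want the dataset listing
              'include_users': False}     # we don't care about the member list
    param_string = urllib.parse.urlencode(params)

    request = urllib.request.Request(action('project_show') + '?' + param_string)
    request.add_header('Authorization', api_token)

    with urllib.request.urlopen(request) as response:
        project_details = json.loads(response.read())['result']

    print(project_details)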

The last one is what most people want to know about: how do I actually get to the data itself? Assuming you do have permission to get to the data files, you need the resource ID. That can be found inside that little green "Data API" button above any of the CSV files; if you click on it, it'll bring up what that resource ID is. It also shows up in any of the buttons marked "Embed," which carry that little ID inside of them so you can dig it out of there. Once again, all you've got to do is set up a parameter dictionary, this time with a resource ID.

This next one is a little different: it's a "q" followed by the words "Canadian Solar," because what I'm going to do is search that CSV for any records for Canadian Solar installations in the Denver area. This is just going to bring me a list of all of those installations. The idea is to pull out the site IDs so I can then go look at those data files if I want to. Everything after that is identical to what we've done before.
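Sketched out (datastore_search is CKAN's standard full-text search over a digested CSV; the resource ID is a placeholder from the Data API button):

    import json
    import urllib.parse
    import urllib.request

    emn_datahub = 'https://datahub.duramat.org'
    action = lambda a: '{}/api/3/action/{}'.format(emn_datahub, a)
    api_token = '<YOUR-API-TOKEN>'  # placeholder

    # 'q' runs a full-text search across the digested CSV's records.
    params = {'resource_id': '<RESOURCE-ID>',  # placeholder
              'q': 'Canadian Solar'}
    param_string = urllib.parse.urlencode(params)

    request = urllib.request.Request(action('datastore_search') + '?' + param_string)
    request.add_header('Authorization', api_token)

    with urllib.request.urlopen(request) as response:
        records = json.loads(response.read())['result']['records']

    # Each matching record is one row of the CSV, including the site ID we're after.
    for record in records:
        print(record)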

We run that, and we get a list of all of the Canadian Solar installations in the Denver area that they were using for this project. Here it says "Canadian Solar" up here, and then here's the list of them, one after another. At that point, you can go querying for the data itself, if you wish. So, that kind of brings me to the end of the walk-through. So –

[End of Audio]