Examples of Tasks You can Perform Using Tuner
The following section lists tasks that you can perform using Tuner, along with sample JSON code for each task.
To Create a Data Store
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Create Datastore": [
      {
        "Datastore Name": "Turbine-4",
        "Properties": {
          "Default Datastore": true,
          "Description": "Custom datastore for storing data of Turbine-4"
        }
      }
    ]
  }
}
- Datastore Name: Must be a sequence of characters enclosed in double quotation marks (").
- Default Datastore: Enter true to set the data store as the default one.
What can you do with the operation?
Create a data store and set it as the default one. You can also create multiple data stores by providing the appropriate details in the JSON file.
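The request body can also be assembled programmatically before writing it to the Tuner JSON file. The sketch below builds the same structure in Python; the helper name and its defaults are illustrative, not part of Tuner.

```python
import json

def create_datastore_request(node, name, default=False, description=""):
    """Build a 'Create Datastore' request body (helper name is illustrative)."""
    return {
        "Historian Node": node,
        "Data Management": {
            "Create Datastore": [
                {
                    "Datastore Name": name,
                    "Properties": {
                        "Default Datastore": default,
                        "Description": description,
                    },
                }
            ]
        },
    }

# Serialize with indentation so the resulting file stays readable.
request = create_datastore_request(
    "10.181.213.175", "Turbine-4", default=True,
    description="Custom datastore for storing data of Turbine-4")
print(json.dumps(request, indent=2))
```

Extending the "Create Datastore" list with more entries is how you would create several data stores in one request.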
Purging a Data Store
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Purge": [ { "Datastore Name": "Turbine-4" } ]
  }
}
- Datastore Name: Must be a sequence of characters enclosed in double quotation marks (").
What can you do with the operation?
Delete the Turbine-4 data store from your system.
Purging Archives based on Archive Name
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Purge": [
      {
        "Datastore Name": "Turbine-10",
        "Properties": {
          "Archive File Names": [
            "Turbine-10_historian-archiver_Archive046.iha",
            "Turbine-10_historian-archiver_Archive1543363199.iha"
          ]
        }
      }
    ]
  }
}
- Datastore Name: Must be a sequence of characters enclosed in double quotation marks (").
- Archive File Names: Must be sequences of characters enclosed in double quotation marks (").
What can you do with the operation?
Delete Turbine-10_historian-archiver_Archive046.iha and Turbine-10_historian-archiver_Archive1543363199.iha.
Purging Archives based on Timestamps
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Purge": [
      {
        "Datastore Name": "User",
        "Properties": {
          "Start Time": 1543417800,
          "End Time": 1543418220
        }
      }
    ]
  }
}
- Datastore Name: Must be a sequence of characters enclosed in double quotation marks (").
- Start Time/End Time: Must be in epoch time format, in seconds.
What can you do with the operation?
Delete the data between the given timestamps. This deletes entire archives that fall within or overlap the given time range.
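Start Time and End Time values such as 1543417800 are epoch seconds in UTC. A quick way to derive them from a human-readable time using only the Python standard library (the helper name is ours, not a Tuner API):

```python
import calendar
from datetime import datetime

def to_epoch_seconds(dt):
    """Convert a naive UTC datetime to epoch seconds for Start Time / End Time."""
    return calendar.timegm(dt.timetuple())

start = to_epoch_seconds(datetime(2018, 11, 28, 15, 10, 0))  # 2018-11-28 15:10:00 UTC
end = to_epoch_seconds(datetime(2018, 11, 28, 15, 17, 0))    # 2018-11-28 15:17:00 UTC
print(start, end)  # 1543417800 1543418220
```

`calendar.timegm` interprets the datetime as UTC, which avoids surprises from the local time zone of the machine preparing the file.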
Backup of Archive Files using File Names
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Back Up": [
      {
        "Datastore Name": "User",
        "Back Up Path": "/data/",
        "Properties": {
          "Archive File Names": [
            "User_historian-archiver_Archive1543449599"
          ]
        }
      }
    ]
  }
}
- Datastore Name: Must be a sequence of characters enclosed in double quotation marks (").
- Back Up Path: Must be a valid path in the context of the Historian docker container. Note: The Back Up Path parameter must always be set to /data/. However, the backup is created in the /data/database folder.
- Archive File Names: Must be valid archive names. You can provide multiple archive names, separated by commas.
What can you do with the operation?
Back up the specified archive file to the /data/database folder.
Backup of Archive Files using Number of Files
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Back Up": [
      {
        "Datastore Name": "User",
        "Back Up Path": "/data/",
        "Properties": {
          "Number Of Files": 2
        }
      }
    ]
  }
}
- Datastore Name: Must be a sequence of characters enclosed in double quotation marks (").
- Back Up Path: Must be a valid path in the context of the archiver docker container. Note: The Back Up Path parameter must always be set to /data/. However, the backup is created in the /data/database folder.
- Number Of Files: The number of files to be backed up. Must be a numerical value.
What can you do with the operation?
Back up the last two archive files to the /data/database folder.
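If you want to check in advance which archives a Number Of Files backup would cover, you can sort the archive names by the epoch timestamp embedded in them. This helper is illustrative, not part of Tuner; it assumes names end in `Archive<number>` with an optional `.iha` extension, as in the samples above.

```python
import re

def newest_archives(names, n):
    """Return the n archive names with the largest embedded timestamps."""
    def embedded_ts(name):
        match = re.search(r"Archive(\d+)(?:\.iha)?$", name)
        return int(match.group(1)) if match else -1
    return sorted(names, key=embedded_ts)[-n:]

archives = [
    "User_historian-archiver_Archive1543363199",
    "User_historian-archiver_Archive1543449599",
    "User_historian-archiver_Archive1543276799",
]
print(newest_archives(archives, 2))
```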
Backup of Archive Files using Start time and End Time
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Back Up": [
      {
        "Datastore Name": "User",
        "Back Up Path": "/data/",
        "Properties": {
          "Start Time": 1540511999,
          "End Time": 1540598399
        }
      }
    ]
  }
}
- Datastore Name: Must be a sequence of characters enclosed in double quotation marks (").
- Back Up Path: Must be a valid path in the context of the archiver docker container. Note: The Back Up Path parameter must always be set to /data/. However, the backup is created in the /data/database folder.
- Start/End Time: Must be an epoch timestamp.
What can you do with the operation?
Back up the data between the given timestamps. This backs up entire archives that fall within or overlap the given time range.
Restore
{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Restore": [
      {
        "File Path": "/data/User_historian-archiver_Archive1543507756_Backup.zip",
        "Archive Name": "User_historian-archiver_Archive1543507756",
        "Datastore Name": "User"
      }
    ]
  }
}
- File Path: The path of the backed-up file. Note: The File Path parameter must always be set to /data/<name of the archive file>. However, the archive file is located in the /data/database/ folder.
- Archive Name: The name of the archive to which the data is to be restored.
- Datastore Name: The name of the data store for which the archive file must be restored.
What can you do with the operation?
Restore the backed-up file into a specific data store.
Data Store options for Archive Type Hours/Days
"Datastore Name": "ScadaBuffer",
"Properties": {
  "Archive Type": "Hours",
  "Archive Duration": 10,
  "Archive Active Hours": 10,
  "Archive Default Backup Path": "/data/archiver/backupfiles/",
  "Datastore Duration": 4
}
- Archive Type: Valid values are Hours, Days, and BySize.
- Archive Duration: Must be a numerical value.
- Archive Active Hours: Must be a numerical value.
- Archive Default Backup Path: Must be a valid path.
- Datastore Duration: Must be a numerical value.
What can you do with the operation?
Set the data store properties as mentioned in the configuration file.
Data Store options for Archive Type BySize
"Datastore Name": "DHSSystem",
"Properties": {
  "Archive Type": "BySize",
  "Archive Default Size(MB)": 200,
  "Archive Active Hours": 744,
  "Archive Default Backup Path": "/data/archiver/backupfiles/"
}
- Archive Default Size(MB): Must be a numerical value. For the remaining keys, refer to the preceding examples.
What can you do with the operation?
Set the data store properties as mentioned in the configuration file for the archive type BySize.
Tag Options-Collection Properties
{
  "Historian Node": "10.181.213.175",
  "Config": {
    "Tag Options": [
      {
        "Tag Pattern": "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
        "Tag Properties": {
          "Collection": {
            "Collection": true,
            "Collection Interval Unit": "sec",
            "Collection Interval": 5,
            "Collection Offset Unit": "sec",
            "Collection Offset": 1,
            "Time Resolution": "sec"
          }
        }
      }
    ]
  }
}
- Collection: Must be true or false.
- Collection Interval Unit: Must be sec, min, hour, or millisec.
- Collection Offset Unit: Must be sec or millisec.
- Collection Interval and Collection Offset: Must be numerical values. Note: You can filter tags based on the tag names, collector name, and data store name. To do so, replace Tag Pattern with Collector Name or Datastore Name.
What can you do with the operation?
Set the tag properties as mentioned in the configuration file.
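Assuming the offset shifts each collection within the interval (our reading of these settings, not an official definition), the sample above, with a 5-second interval and a 1-second offset, would collect at 1 s, 6 s, 11 s, and so on:

```python
def collection_times(interval_s, offset_s, count, start_s=0):
    """Illustrative sample schedule: first collection at start + offset,
    then one collection every interval seconds (assumed semantics)."""
    return [start_s + offset_s + i * interval_s for i in range(count)]

print(collection_times(5, 1, 4))  # [1, 6, 11, 16]
```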
Tag Options-Compression Properties
{
  "Historian Node": "10.181.213.175",
  "Config": {
    "Tag Options": [
      {
        "Tag Pattern": "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
        "Tag Properties": {
          "Collector Compression": {
            "Collector Compression": true,
            "Collector Deadband": "Percent Range",
            "Collector Deadband Value": 80,
            "Collector Compression Timeout Resolution": "min",
            "Collector Compression Timeout Value": 10
          }
        }
      }
    ]
  }
}
- Collector Compression: Must be true or false.
- Collector Deadband Value/Collector Compression Timeout Value: Must be a numerical value.
- Collector Deadband: Must be Percent Range or Absolute.
- Collector Compression Timeout Resolution: Must be sec, min, hour, or millisec. Note: You can filter tags based on the tag names, collector name, and data store name. To do so, replace Tag Pattern with Collector Name or Datastore Name.
What can you do with the operation?
Set the compression properties as mentioned in the configuration file.
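As a rough sketch of how a Percent Range deadband behaves (a simplified model for illustration, not the archiver's actual implementation): a new value is reported only when it differs from the last reported value by more than the given percentage of the engineering-units span.

```python
def exceeds_deadband(last_reported, new_value, deadband_pct, egu_span):
    """Simplified percent-range deadband check (illustrative model only)."""
    threshold = (deadband_pct / 100.0) * egu_span
    return abs(new_value - last_reported) > threshold

# With an 80% deadband on a 0-100 EGU span, only changes larger than 80 are kept.
print(exceeds_deadband(10, 95, 80, 100))  # True
print(exceeds_deadband(10, 50, 80, 100))  # False
```

The compression timeout settings force a value to be reported after the timeout elapses even if it never exceeds the deadband; that part is not modeled here.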
Tag Options-Scaling
{
  "Historian Node": "10.181.213.175",
  "Config": {
    "Tag Options": [
      {
        "Tag Pattern": "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
        "Tag Properties": {
          "Scaling": {
            "Hi Engineering Units": 100,
            "Low Engineering Units": 0,
            "Input Scaling": false,
            "Hi Scale Value": 0,
            "Low Scale Value": 0
          }
        }
      }
    ]
  }
}
What can you do with the operation?
Set the scaling properties as mentioned in the configuration file.
Tag Options-Condition Based Collection
{
  "Historian Node": "10.181.213.175",
  "Config": {
    "Tag Options": [
      {
        "Tag Pattern": "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
        "Tag Properties": {
          "Condition Based Collection": {
            "Condition Based": true,
            "Trigger Tag": "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Boolean",
            "Comparison": ">=",
            "Compare Value": "50000",
            "End Of Collection Marker": true
          }
        }
      }
    ]
  }
}
- Trigger Tag: Must be a valid tag name.
- Comparison: Must be one of =, <, <=, >, >=, !=.
- End Of Collection Marker: Must be true or false. Note: You can filter tags based on the tag names, collector name, and data store name. To do so, replace Tag Pattern with Collector Name or Datastore Name.
What can you do with the operation?
Set the condition-based collection properties as mentioned in the configuration file.
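The Comparison values map directly onto ordinary relational operators. A sketch of how the trigger condition could be evaluated (illustrative only; `should_collect` is our name, not a Tuner API):

```python
import operator

# Map the Comparison strings from the JSON to Python's relational operators.
COMPARATORS = {
    "=": operator.eq,
    "<": operator.lt,
    "<=": operator.le,
    ">": operator.gt,
    ">=": operator.ge,
    "!=": operator.ne,
}

def should_collect(trigger_value, comparison, compare_value):
    """Return True when the trigger tag's value satisfies the condition."""
    return COMPARATORS[comparison](trigger_value, compare_value)

print(should_collect(60000, ">=", 50000))  # True
print(should_collect(40000, ">=", 50000))  # False
```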
Tag Options- Using Tag Group
{
  "Historian Node": "10.181.213.175",
  "Config": {
    "Tag Options": [
      {
        "Tag Group": [
          "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Boolean",
          "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte"
        ],
        "Tag Properties": {
          "Tag Datastore": "ScadaBuffer",
          "Data Type": "Int16"
        }
      }
    ]
  }
}
- Tag Group: Must be a list of valid tag names. You can provide any number of tags.
What can you do with the operation?
Set the tag properties for the group of tags listed in the Tag Group section.
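A Tag Group request can likewise be assembled from a plain list of tag names. The helper below is illustrative, not a Tuner API:

```python
import json

def tag_group_request(node, tags, properties):
    """Build a 'Tag Options' body that applies properties to a group of tags."""
    return {
        "Historian Node": node,
        "Config": {
            "Tag Options": [
                {"Tag Group": list(tags), "Tag Properties": dict(properties)}
            ]
        },
    }

body = tag_group_request(
    "10.181.213.175",
    ["US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Boolean",
     "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte"],
    {"Tag Datastore": "ScadaBuffer", "Data Type": "Int16"})
print(json.dumps(body, indent=2))
```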