We’re excited to introduce support for custom validators in Schema Enforcer. Schema Enforcer provides a framework for testing structured data against schema definitions using JSON Schema and, now, custom Python validators. Check out Introducing Schema Enforcer for more background and an introduction to the tool.
What Is a Custom Validator?
A custom validator is a Python module that allows you to run any logic against your data on a per-host basis.
Let’s start with an example. What if you want to validate that every edge router has at least two core interfaces defined?
Here’s a possible way we could model our data in an Ansible host_var file:
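The data file from the original post isn’t reproduced in this excerpt, but a minimal sketch consistent with the validator shown below might look like this (the interface names are illustrative):

```yaml
---
interfaces:
  GigabitEthernet0/0/0:
    type: "core"
  GigabitEthernet0/0/1:
    type: "core"
  GigabitEthernet0/0/2:
    type: "access"
```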
In this example, each physical interface has a type key, which we can evaluate in our custom validator. JSON Schema can be used to validate that this field exists and contains a desired value (e.g., “core”, “access”, etc.). However, it cannot check whether there are at least two interfaces with this key set to “core”.
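For example, a JSON Schema fragment along these lines (a sketch, not taken from the project) can require the type key on each interface and restrict its value, yet it has no way to express “at least two of these must be core”:

```yaml
# Hypothetical fragment for a single interface object.
type: "object"
properties:
  type:
    type: "string"
    enum: ["core", "access"]
required:
  - "type"
```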
JMESPath Custom Validators
As a shortcut for basic use cases, Schema Enforcer provides the JmesPathModelValidation class. This class supports using JMESPath queries against your data along with generic comparison operators. The logic is provided by the base class, so no Python is required beyond setting a few variables.
To solve the preceding example, we can use the following custom validator:
```python
from schema_enforcer.schemas.validator import JmesPathModelValidation


class CheckInterface(JmesPathModelValidation):  # pylint: disable=too-few-public-methods
    top_level_properties = ["interfaces"]
    id = "CheckInterface"  # pylint: disable=invalid-name
    left = "interfaces.*[@.type=='core'][] | length([?@])"
    right = 2
    operator = "gte"
    error = "Less than two core interfaces"
```
The top_level_properties variable maps this validator to the interfaces object in our data. The real work is done by the left, right, and operator variables. Think of these as the parts of an expression, left operator right: in this case, the number of core interfaces must be greater than or equal to 2.
This custom validator uses the JMESPath expression in left to query the data. The query returns all interfaces whose type is “core”, and the output is piped to a built-in JMESPath function that gives us the length of the return value. When applied to our example data, the query evaluates to 2. When checked by our custom validator, this host will pass, as the value of the query is greater than or equal to 2.
```
root@b295daf33db5:/local/examples/ansible3# schema-enforcer ansible --show-checks
Found 2 hosts in the inventory
Ansible Host              Schema ID
--------------------------------------------------------------------------------
az_phx_pe01               ['CheckInterface']
az_phx_pe02               ['CheckInterface']
```
In the preceding output, we see the CheckInterface validator is applied to two hosts.
When Schema Enforcer is run against the inventory, the output shows whether any hosts fail validation. If a host fails, the error message defined in the error variable of the CheckInterface class is shown.
```
root@b295daf33db5:/local/examples/ansible3# schema-enforcer ansible
Found 2 hosts in the inventory
FAIL | [ERROR] Less than two core interfaces [HOST] az_phx_pe02 [PROPERTY]
root@b295daf33db5:/local/examples/ansible3#
```
Advanced Use Cases
For more advanced use cases, Schema Enforcer provides the BaseValidation class which can be used to build your own complex validation classes. BaseValidation provides two helper functions for reporting pass/fail: add_validation_pass and add_validation_error. Schema Enforcer will automatically call the validate method of your custom class for all instances of your data. The logic as to whether a validator passes or fails is up to your implementation.
Since we can run arbitrary logic against the data using Python, one possible use case is to check data against some external service. In the example below, a simple BGP peer data file is checked against the ARIN database to validate that the name is correct.
Sample Data
```yaml
---
bgp_peers:
  - asn: 6939
    name: "Hurricane Electric LLC"
  - asn: 701
    name: "VZW"
  - asn: 100000
    name: "Private"
```
Validator
"""Custom validator for BGP peer information."""import requestsfrom schema_enforcer.schemas.validator import BaseValidationclass CheckARIN(BaseValidation):"""Verify that BGP peer name matches ARIN ASN information.""" def validate(self, data, strict):"""Validate BGP peers for each host.""" headers = {"Accept": "application/json"} for peer in data["bgp_peers"]: # pylint: disable=invalid-name r = requests.get(f"http://whois.arin.net/rest/asn/{peer['asn']}", headers=headers) if r.status_code != requests.codes.ok: # pylint: disable=no-member self.add_validation_error(f"ARIN lookup failed for peer {peer['name']} with ASN {peer['asn']}") continue arin_info = r.json() arin_name = arin_info["asn"]["orgRef"]["@name"] if peer["name"] != arin_name: self.add_validation_error( f"Peer name {peer['name']} for ASN {peer['asn']} does not match ARIN database: {arin_name}" ) else: self.add_validation_pass()
If we run Schema Enforcer with this validator, we get the following output:
```
root@da72aae39ede:/local/examples/example4# schema-enforcer validate --show-checks
Structured Data File                   Schema ID
--------------------------------------------------------------------------------
./bgp/peers.yml                        ['CheckARIN']
root@da72aae39ede:/local/examples/example4# schema-enforcer validate
FAIL | [ERROR] Peer name VZW for ASN 701 does not match ARIN database: MCI Communications Services, Inc. d/b/a Verizon Business [FILE] ./bgp/peers.yml [PROPERTY]
FAIL | [ERROR] ARIN lookup failed for peer Private with ASN 100000 [FILE] ./bgp/peers.yml [PROPERTY]
```
You could expand this example to do other validation, such as checking that the ASN is valid before making the request to ARIN.
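As a rough sketch of that idea (not part of the published example), a small guard could skip the ARIN lookup for private or out-of-range ASNs:

```python
# Hypothetical helper: RFC 6996 private ASN ranges plus the 32-bit upper bound.
PRIVATE_ASN_RANGES = [(64512, 65534), (4200000000, 4294967294)]


def asn_is_public(asn):
    """Return True if the ASN is within the public 32-bit range and not private."""
    if not 1 <= asn <= 4294967295:
        return False
    return not any(low <= asn <= high for low, high in PRIVATE_ASN_RANGES)
```

Calling asn_is_public(peer["asn"]) before the requests.get lookup in CheckARIN would let the validator report a clearer error without waiting on a failed HTTP request.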
For more information on this Schema Enforcer feature, see the docs. And if you have any interesting use cases, please let us know!
Does this all sound amazing? Want to know more about how Network to Code can help you do this? Reach out to our sales team. If you want to help make this a reality for our clients, check out our careers page.
Recently Network to Code open sourced schema-enforcer, and immediately my mind turned to integrating this tool with CI pipelines. The goal is to have fast, repeatable, and reusable pipelines that ensure the integrity of the data stored in Git repositories. We accomplish repeatability and reusability by packaging schema-enforcer with Docker and publishing the image to a common Docker registry.
By integrating repositories containing structured data with a CI pipeline that enforces schema, you can better trust the repeatability of the downstream automation that consumes that data. This is critical when the data is used as a source of truth for automation to consume. It also helps you catch schema-invalid data before it is used by a configuration tool, such as Ansible. Imagine being able to empower other teams to make changes to data repositories and trust that the automation performs the checks an engineer does manually today.
How containers can speed up CI execution.
Containers can be a catalyst to speeding up the process of CI execution for the following reasons:
Purpose-built containers in CI allow for standardized pipelines with little setup time.
Sourcing from a pre-built image to execute a single test command removes the need to build an image or manage a virtual environment per repository.
Reduced build times from using pre-built images allow for faster-running pipelines and help shorten the feedback loop to the end user.
Example with privately hosted GitLab.
For today’s example I am using my locally hosted GitLab and Docker Registry. This was done to showcase the power of building internal resources that can be easily integrated with on-premise solutions. This example could easily be adapted to run in GitHub & Travis CI with the same level of effectiveness and speed of execution.
Building a container to use in CI.
Click Here for documentation on Dockerfile construction and docker build commands. The Dockerfile starts with python:3.8 as a base. We then set the working directory, install schema-enforcer, and lastly set the default entrypoint and command for the container image.
Dockerfile
```dockerfile
FROM python:3.8

WORKDIR /usr/src/app

RUN python -m pip install schema-enforcer

ENTRYPOINT ["schema-enforcer"]
CMD ["validate", "--show-pass"]
```
Publishing schema-enforcer container to private registry.
Click Here for documentation on hosting a private Docker registry. If using Docker Hub, the image tag would change to <namespace>/<container name>:tag. If I were to push this to my personal Docker Hub namespace, the image would be whitej6/schema-enforcer:latest.
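The build and push commands aren’t shown in this excerpt; assuming the registry path used in the docker-compose.yml below, they would look roughly like:

```bash
docker build -t registry.whitej6.com/ntc/docker/schema-enforcer:latest .
docker push registry.whitej6.com/ntc/docker/schema-enforcer:latest
```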
For the first use case, we are starting with example1 in the schema-enforcer repository located here. We then add a docker-compose.yml, in which we mount the full project repo into the previously built container, and create a pipeline with two stages in .gitlab-ci.yml, which is triggered on every commit.
```
➜  schema-example git:(master) ✗ tree -a -I '.git'
.
├── .gitlab-ci.yml
├── chi-beijing-rt1
│   ├── dns.yml       # This will be the offending file in the failing CI pipeline.
│   └── syslog.yml
├── docker-compose.yml
├── eng-london-rt1
│   ├── dns.yml
│   └── ntp.yml
└── schema
    └── schemas
        ├── dns.yml   # This will be the schema definition that triggers the failure.
        ├── ntp.yml
        └── syslog.yml

4 directories, 9 files
```
Click Here for documentation on docker-compose and structuring the docker-compose.yml file. We define a single service called schema that uses the image we just published to the Docker registry and mounts the current working directory of the pipeline execution into the container at /usr/src/app. We use the default entrypoint and cmd specified in the Dockerfile, schema-enforcer validate --show-pass, but this could be overwritten in the service definition. For instance, if we would like to enable the strict flag, we would add command: ['validate', '--show-pass', '--strict'] inside the schema service. Keep in mind the command attribute of a service overrides the CMD directive in the Dockerfile.
```yaml
---
version: "3.8"
services:
  schema:
    # Uncomment the next line to enable strict on schema-enforcer
    # command: ['validate', '--show-pass', '--strict']
    image: registry.whitej6.com/ntc/docker/schema-enforcer:latest
    volumes:
      - ./:/usr/src/app/
```
Click Here for documentation on structuring the .gitlab-ci.yml file. We define two stages in the pipeline, and each stage has one job. The first stage ensures we have the most up-to-date container image for schema-enforcer; next, we run the schema service from the docker-compose.yml file. By specifying --exit-code-from schema, we pass the exit code of the schema service through to the docker-compose command. The commands specified in the script are used to determine whether the job runs successfully; if the schema service returns a non-zero exit code, the job and pipeline will be marked as failed. The second stage ensures we are good tenants of Docker and clean up after ourselves: docker-compose down removes any containers or networks associated with this project.
```yaml
---
stages:
  - test
  - clean

test:
  stage: test
  script:
    - docker-compose pull
    - docker-compose up --exit-code-from schema schema

clean:
  stage: clean
  script:
    - docker-compose down || true
  when: always
```
Failing.
In this example chi-beijing-rt1/dns.yml has a boolean value instead of an IPv4 address as specified in the schema/schemas/dns.yml. As you can see, the container returned a non-zero exit code, failing the pipeline and blocking the merge into a protected branch.
chi-beijing-rt1/dns.yml
```yaml
# jsonschema: schemas/dns_servers
---
dns_servers:
  - address: true  # This is a boolean value and we are expecting a string value in an IPv4 format
  - address: "10.2.2.2"
```
schema/schemas/dns.yml
```yaml
---
$schema: "http://json-schema.org/draft-07/schema#"
$id: "schemas/dns_servers"
description: "DNS Server Configuration schema."
type: "object"
properties:
  dns_servers:
    type: "array"
    items:
      type: "object"
      properties:
        name:
          type: "string"
        address:  # This is the specific property that will be used in the failed example.
          type: "string"
          format: "ipv4"
        vrf:
          type: "string"
      required:
        - "address"
    uniqueItems: true
required:
  - "dns_servers"
```
Runner output.
We see exactly which file and attribute fails the pipeline along with the runtime of the pipeline in seconds.
Blocked Merge Request.
When sourcing from a branch with a failing pipeline, GitLab can block merging until the pipeline succeeds. Because the pipeline is triggered on each commit, we can resolve the issue in the next commit, which then triggers a new pipeline. Once the issue has been resolved, the Merge button is no longer greyed out and the branch can be merged into the target branch.
Passing.
Now the previous error has been corrected and a new commit has been made on the same branch. GitLab then reruns the same pipeline with the new commit, and once it passes, the branch can be merged into the protected branch.
chi-beijing-rt1/dns.yml
```yaml
# jsonschema: schemas/dns_servers
---
dns_servers:
  - address: "10.2.2.3"  # This is the value that has been updated to align with the schema definition.
  - address: "10.2.2.2"
```
Runner output.
With the issue resolved and committed, we now see the previously offending file is passing the pipeline.
Fixed Merge Request.
The merge request is now able to be merged into the target branch.
As a network engineer by trade who has come into automation, I have at times found it difficult to trust the machine that builds the machine, let alone others eager to collaborate. Building schema safeguards into my early pipelines would have saved me a tremendous amount of time and headache.
Does this all sound amazing? Want to know more about how Network to Code can help you do this? Reach out to our sales team. If you want to help make this a reality for our clients, check out our careers page.
These days, most organizations heavily leverage YAML and JSON to store and organize all sorts of data: to define variables, to provide input for generating device configurations, to define inventory, and for many other use cases. Both formats are popular because they are flexible and easy to use. Users with little to no experience working with structured data, as well as very experienced programmers, can pick up JSON and YAML quickly because neither format requires a schema in order to define data.
As the use of structured data increases, the flexibility that comes from not requiring data to adhere to a schema creates complexity and risk. If a user accidentally defines the data for ntp_servers in two different structures (e.g., one is a list and one is a dictionary), automation tooling must handle the difference in inputs in some way. Often, the tooling just bombs out with a cryptic message in such cases. The tool consuming this data rightfully expects a contract: the data will adhere to a clearly defined form, so the tool can interact with it in a standard way. It is for the same reason that APIs, when updated, should never change the format in which they provide data unless there is some way to delineate the new format (e.g., an API version increment). By ensuring data is defined in a standard way, complexity and risk can be mitigated.
With structured data languages like YAML and JSON, which do not inherently define a schema (contract) for the data they describe, a schema definition language can be used to provide this contract, thereby mitigating complexity and risk. Schema definition languages come with their own maintenance burden, though, because writing the logic that checks structured data against a schema falls on the user. The user doesn’t just maintain structured data and schemas; they also have to build and maintain the tooling that verifies the data is schema valid. To let users simply write schemas and structured data, and worry less about writing and maintaining the code that bolts them together, Network to Code has developed a tool called Schema Enforcer. Today we are happy to announce that we are making Schema Enforcer available to the community.
Schema Enforcer is a framework that allows users to define schemas for their structured data and assert that the structured data adheres to those schemas. The structured data can (currently) come in the form of JSON or YAML data files, or an Ansible inventory. Schema definitions are written in the JSONSchema language, in YAML or JSON format.
Why use Schema Enforcer?
If you’re familiar with JSONSchema already, you may be thinking, “wait, doesn’t JSONSchema do all of this?” JSONSchema does allow you to validate that structured data adheres to a schema definition, but it requires you to write your own code to interact with the data and manage its adherence to the defined schema. Schema Enforcer is meant to provide a wrapper which makes it easy for users to manage structured data without needing to write their own code to check it for adherence to a schema. It provides the following advantages over using JSONSchema alone:
Provides a framework for mapping data files to the schema definitions against which they should be checked for adherence
Provides a framework for validating that Ansible inventory adheres to a schema definition or multiple schema definitions
Prints clear log messages indicating each data object examined which is not adherent to schema, and the specific way in which these data objects are not adherent
Allows a user to define unit tests asserting that their schema definitions are written correctly (e.g. that non-adherent data fails validation in a specific way, and adherent data passes validation)
Exits with an exit code of 1 in the event that data is not adherent to schema, which makes it fit for use in a CI pipeline alongside linters and unit tests
An Example
I’ve created the following directories and files in a repository called new_example.
The directory includes
structured data (in YAML format) defining ntp servers for the host chi-beijing-rt01 inside of the file at hostvars/chi-beijing-rt01/ntp.yml
a schema definition inside of the file at schema/schemas/ntp.yml
If we examine the file at hostvars/chi-beijing-rt01/ntp.yml we can see the following data defined in YAML format.
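The file contents aren’t reproduced in this excerpt; a sketch of what they might look like (server names and addresses are illustrative):

```yaml
# jsonschema: schemas/ntp
---
ntp_servers:
  - address: "10.6.6.6"
    name: "ntp1"
  - address: "10.7.7.7"
    name: "ntp2"
```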
Note the comment # jsonschema: schemas/ntp at the top of the YAML file. This comment declares the schema that the data in this file should be checked against, as well as the language used to define the schema (JSONSchema here). Multiple schemas can be declared by comma separating them in the comment. For instance, the comment # jsonschema: schemas/ntp,schemas/syslog would declare that the data should be checked for adherence to two schemas, one with the ID schemas/ntp and another with the ID schemas/syslog. We can validate that this mapping is being inferred correctly by running the command schema-enforcer validate --show-checks.
The --show-checks flag shows each data file along with a list of every schema ID it will be checked for adherence to.
There are other mechanisms for mapping data files to the schemas against which they should be validated; see docs/mapping_schemas.md in the Schema Enforcer git repository for more details. YAML supports comments using an octothorpe (#), but JSON does not. For this reason, only data defined in YAML format can declare the schema to which it should adhere with a comment; another mapping mechanism must be used if your data is defined in JSON format.
If we examine the file at schema/schemas/ntp.yml we can see the following schema definition. This is written in the JSONSchema language and formatted in YAML.
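The schema file itself isn’t reproduced in this excerpt. Based on the description that follows (and mirroring the dns_servers schema shown later in this post), it would look roughly like this:

```yaml
---
$schema: "http://json-schema.org/draft-07/schema#"
$id: "schemas/ntp"
description: "NTP Configuration schema."
type: "object"
properties:
  ntp_servers:
    type: "array"
    items:
      type: "object"
      properties:
        name:
          type: "string"
        address:
          type: "string"
          format: "ipv4"
        vrf:
          type: "string"
      required:
        - "address"
    uniqueItems: true
additionalProperties: false
required:
  - "ntp_servers"
```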
The schema definition above is used to ensure that:
The data file's top level is a hash/dictionary (an object in JSONSchema parlance) containing the ntp_servers property
No top level keys can be defined in the data file besides ntp_servers
The value of ntp_servers is of type array/list
Each item in this array must be unique
Each element of this array/list is a dictionary with the possible keys name, address and vrf
Of these keys, address is required; name and vrf can optionally be defined, but it is not necessary to define them.
address must be of type “string” and it must be a valid IP address
name must be of type “string” if it is defined
vrf must be of type “string” if it is defined
Here is an example of the structured data being checked for adherence to the schema definition.
We can see that when schema-enforcer runs, it shows that all files containing structured data are schema valid. Also note that Schema Enforcer exits with a code of 0.
What happens if we modify the data such that the first NTP server defined has a value of the boolean true, and add a syslog_servers dictionary/hash type object at the top level of the YAML file?
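A sketch of the modified data (again with illustrative addresses):

```yaml
# jsonschema: schemas/ntp
---
ntp_servers:
  - address: true
  - address: "10.7.7.7"
    name: "ntp2"
syslog_servers:
  - address: "10.3.3.3"
```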
When schema-enforcer runs against this data, two errors are flagged. The first informs us that the first element in the array which is the value of the ntp_servers top level key is a boolean where a string was expected. The second informs us that the top level property syslog_servers is additional to (is not specified in) the properties defined by the schema, and that additional properties are not allowed per the schema definition. Note that schema-enforcer exits with a code of 1, indicating a failure. If Schema Enforcer were run before structured data is ingested into automation tools as part of a pipeline, the pipeline would stop before those tools ever consumed the malformed data.
Validating Ansible Inventory
Schema Enforcer supports validating that variables defined in an Ansible inventory adhere to a schema definition (or multiple schema definitions).
To do this, Schema Enforcer first constructs a dictionary containing key/value pairs for each attribute defined in the inventory. It does this by flattening the variables from the groups the host is a part of. After doing this, schema-enforcer maps which schemas it should use to validate the host's variables in one of two ways:
By using a list of schema ids defined by the schema_enforcer_schema_ids attribute (defined at the host or group level).
By automatically mapping a schema’s top level properties to the Ansible host’s keys.
That may have been gibberish on first pass, but the examples in the following sections will hopefully make things clearer.
An Example of Validating Ansible Variables
In the following example, we have an inventory file which defines three groups, nyc, spine, and leaf. spine and leaf are children of nyc.
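Neither the inventory file nor the group vars are reproduced in this excerpt. A sketch of an inventory consistent with the description (host names match the output discussed below):

```ini
[nyc:children]
spine
leaf

[spine]
spine1
spine2

[leaf]
leaf1
leaf2
```

And a sketch of group_vars/spine.yaml, which declares the schema IDs and defines the data checked against them (the dns_servers values mirror the failure discussed below; any interface data the group defines is omitted here):

```yaml
---
schema_enforcer_schema_ids:
  - "schemas/dns_servers"
  - "schemas/interfaces"

dns_servers:
  - address: true
  - address: "10.2.2.2"
```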
Note the schema_enforcer_schema_ids variable. This declaratively tells Schema Enforcer which schemas to use when running tests to ensure that the Ansible host vars for every host in the spine group are schema valid.
Here is the interfaces schema which is declared above:
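The interfaces schema isn’t reproduced in this excerpt, and the post doesn’t describe its properties in detail, so the sketch below is only a minimal placeholder to illustrate the $id being declared (the properties are assumptions, not the actual schema):

```yaml
---
$schema: "http://json-schema.org/draft-07/schema#"
$id: "schemas/interfaces"
description: "Interfaces configuration schema."
type: "object"
properties:
  interfaces:
    type: "object"
required:
  - "interfaces"
```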
Note that the $id property is what is being declared by the schema_enforcer_schema_ids variable.
When we run the schema-enforcer ansible command with the --show-pass flag, we can see that the dns_servers attribute defined for spine1 and spine2 did not adhere to schema.
By default, schema-enforcer prints a “FAIL” message to stdout for each object in the data file which does not adhere to schema. If no objects fail to adhere to schema definitions, a single line is printed indicating that all data files are schema valid. The --show-pass flag modifies this behavior such that, in addition to the default behavior, a line is printed to stdout for every file that is schema valid, indicating it passed the schema adherence check.
Looking at the group_vars/spine.yaml file above, this failure occurs because the first DNS server in the dns_servers list has a value of the boolean true.
```
bash$ cat schema/schemas/dns.yml
---
$schema: "http://json-schema.org/draft-07/schema#"
$id: "schemas/dns_servers"
description: "DNS Server Configuration schema."
type: "object"
properties:
  dns_servers:
    type: "array"
    items:
      type: "object"
      properties:
        name:
          type: "string"
        address:
          type: "string"
          format: "ipv4"
        vrf:
          type: "string"
      required:
        - "address"
    uniqueItems: true
required:
  - "dns_servers"
```
In looking at the schema for DNS servers, we see that each DNS server's address field must be of type string and format ipv4 (i.e., an IPv4 address). Because the first element in the list of DNS servers has an address of the boolean true, it is not schema valid.
Another Example of Validating Ansible Vars
Similar to the way that schema-enforcer validate --show-checks can be used to show which data files will be checked by which schema definitions, the schema-enforcer ansible --show-checks command can be used to show which Ansible hosts will be checked for adherence to which schema IDs.
From the execution of the command, we can see that 4 hosts were loaded from the inventory. This is just what we expect from our earlier examination of the .ini file which defines the Ansible inventory. We just saw how spine1 and spine2 were checked for adherence to both the schemas/dns_servers and schemas/interfaces schema definitions, and how the schema_enforcer_schema_ids var was configured to declare that devices belonging to the spine group should adhere to those schemas. Let's now examine the leaf group a little more closely.
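The leaf group vars aren’t reproduced in this excerpt; a sketch of group_vars/leaf.yml (addresses illustrative) shows data but no schema ID declaration:

```yaml
---
dns_servers:
  - address: "10.1.1.1"
  - address: "10.2.2.2"
```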
In the leaf.yml file, no schema_enforcer_schema_ids var is configured. There is also no individual data defined at the host level for leaf1 and leaf2, which belong to the leaf group. This brings up the question: how does schema-enforcer know to check the leaf switches for adherence to the schemas/dns_servers schema definition?
The default behavior of schema-enforcer is to map the top level property in a schema definition to vars associated with each Ansible host that have the same name.
```
bash$ cat schema/schemas/dns.yml
---
$schema: "http://json-schema.org/draft-07/schema#"
$id: "schemas/dns_servers"
description: "DNS Server Configuration schema."
type: "object"
properties:
  dns_servers:
    type: "array"
    items:
      type: "object"
      properties:
        name:
          type: "string"
        address:
          type: "string"
          format: "ipv4"
        vrf:
          type: "string"
      required:
        - "address"
    uniqueItems: true
required:
  - "dns_servers"
```
Because the property defined in the schema definition above is dns_servers, the matching Ansible host var dns_servers will be checked for adherence against it.
In fact, if we make the following changes to the leaf group var definition and then run schema-enforcer ansible --show-checks, we can see that devices belonging to the leaf group are now slated to be checked for adherence to both the schemas/dns_servers and schemas/interfaces schema definitions.
OK, so you’ve defined schemas for your data. Now what? Here are a few use cases for Schema Enforcer we’ve found to be “juice worth the squeeze.”
1) Use Schema Enforcer in your CI system to validate defined structured data before merging code. Virtually all git version control systems (GitHub, GitLab, etc.) allow you to configure tests which must pass before code can be merged from a feature branch into the code base. Schema Enforcer can be turned on alongside your other tests (unit tests, linters, etc.). If your data is not schema valid, the exact reason will be printed to the output of the CI system when the tool is run, and the tool will exit with a code of 1, causing the CI system to register a failure. When the CI system sees a failure, it will not allow the merge of data which is not adherent to schema.
2) Use it in a pipeline. Say you have YAML structured data which defines the configuration for network devices. You can run Schema Enforcer as part of a pipeline, before automation tooling (Ansible, Python, etc.) consumes this data to render configurations for devices. If the data isn't schema valid, the pipeline fails before rendering configurations and pushing them to devices (or exploding with a stack trace that takes you 30 minutes and lots of googling to troubleshoot).
3) Run it after your tooling generates structured data and prints it to a file. In this case, Schema Enforcer can act as a sanity check to ensure that your tooling is dumping correctly structured output.
Where Are We Going Next
We plan to add the support for the following features to Schema Enforcer in the future:
Validation of Nornir inventory attributes
Business logic validation
Conclusion
Have a use case for Schema Enforcer? Try it out and let us know what you think! Do you want the ability to write schema definitions in YANG or have another cool idea? We are iterating on the tool and we are open to feedback!
Does this all sound amazing? Want to know more about how Network to Code can help you do this? Reach out to our sales team. If you want to help make this a reality for our clients, check out our careers page.