What Is gRPC? – Part 2

This blog builds on top of what was discussed in Part 1 of this series. If you have not read it, I highly recommend checking it out here. In it, I discuss Protocol Buffers, which are an integral part of gRPC. In this blog post, we’ll build a simple gRPC client/server setup so we can actually see the definition files in action. To get to a fully working gRPC client/server, we first need to make the following additions to our Protocol Buffer definition file:

  • Create a service in our Protocol Buffer definition file
  • Create a request method within our service
  • Define the response that method returns

Extending Our Protocol Buffer Definition File

Let’s go over the three additions we need to make to our Protocol Buffer definition file (the service, request, and response portions).

Adding the Service

Currently, our Protocol Buffer definition should look like this:

syntax = "proto2";

package tutorial;

message Interface {
  required string name = 1;
  optional string description = 2;
  optional int32 speed = 3;
}

Let’s add the service block to our definition file.

service InterfaceService {
}

This block defines a service, named InterfaceService, which our gRPC server will offer. Within this service block, we can add the methods that a gRPC client can call.

Adding the Request and Response Methods

Before we add the request and response methods, I need to discuss the different types of messages gRPC services can handle. There are four basic kinds of gRPC request/response methods (a short client-side sketch of each kind follows this list):

  1. Unary – This is similar to REST. The client sends a request, and the server sends a single response message.
  2. Server Streaming – The client sends a message, and the server responds with a stream of messages.
  3. Client Streaming – The client sends a stream of messages, and the server responds with a single message.
  4. Bidirectional Streaming – Both the client and server send streams of messages.
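
To make these concrete, here is a minimal client-side sketch of each kind using the grpcio library. The stub and method names are hypothetical placeholders, and the snippet assumes a stub has already been created from a channel, as we do when building the client later in this post.

# Hypothetical stub and method names, for illustration only.
# Assumes: stub was created from a channel, as shown later in this post.

# 1. Unary: one request in, one response back.
response = stub.GetInterface(request)

# 2. Server streaming: one request in, an iterator of responses back.
for reply in stub.ListInterfaces(request):
    print(reply)

# 3. Client streaming: an iterator of requests in, one response back.
def generate_requests():
    for iface in interfaces:
        yield iface

summary = stub.BulkAddInterfaces(generate_requests())

# 4. Bidirectional streaming: message iterators in both directions.
for reply in stub.SyncInterfaces(generate_requests()):
    print(reply)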

Each method has its pros and cons. For the sake of keeping this blog short, I’ll be implementing a unary request/response gRPC service.

You can read official documentation on gRPC message types here.

Let’s add a unary request/response type to our InterfaceService in our definition file.

service InterfaceService {
  rpc StandardizeInterfaceDescription(Interface) returns (Interface) {}
}

We are defining a remote method named StandardizeInterfaceDescription that takes in data adhering to the Interface message type we defined in the first blog post. We also define that the method will return data adhering to that same Interface message type.

A gRPC method can also take in and return different message types.

Now, our Protocol Buffer definition file titled interface.proto should look like this.

syntax = "proto2";

package tutorial;

message Interface {
  required string name = 1;
  optional string description = 2;
  optional int32 speed = 3;
}

service InterfaceService {
  rpc StandardizeInterfaceDescription(Interface) returns (Interface) {}
}

Re-creating Our Python gRPC Files

Now that we have our updated interface.proto definition file, we need to recompile to update our auto-generated gRPC Python code. Here we will be using the grpcio-tools library rather than the Protocol Buffer compiler we used in the first blog post. The grpcio-tools library is a more all-encompassing tool than the Protocol Buffer compiler. To install the grpcio-tools library, run the command pip install grpcio-tools. Then, making sure you are in the same directory as the interface.proto file, run the command python -m grpc_tools.protoc -I . --python_out=. --grpc_python_out=. interface.proto.

The -I designates the proto_path, i.e., the directory in which the compiler looks for your proto definition file and any files it imports. The last argument is the name of your proto file.
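
If you prefer to drive the compiler from Python instead of the shell, here is a minimal sketch using the grpc_tools.protoc module that ships with grpcio-tools; it mirrors the command above:

from grpc_tools import protoc

protoc.main(
    [
        "grpc_tools.protoc",    # placeholder argv[0]; ignored by the compiler
        "-I.",                  # proto_path: where interface.proto (and imports) live
        "--python_out=.",       # writes interface_pb2.py here
        "--grpc_python_out=.",  # writes interface_pb2_grpc.py here
        "interface.proto",
    ]
)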

After running this command, you should have two new files named interface_pb2.py and interface_pb2_grpc.py. The first is the protobuf message class code, and the second is the gRPC code related to our service. Here is a snapshot of the current directory structure:

├── interface_pb2_grpc.py
├── interface_pb2.py
└── interface.proto

Creating the gRPC Server

Now let’s create the code needed for our gRPC server. Create a new file at the root of the directory we are working in and name it grpc_server.py. Copy the below code snippet into that file:

from concurrent import futures

import grpc
import interface_pb2
import interface_pb2_grpc


class InterfaceGrpc(interface_pb2_grpc.InterfaceServiceServicer):
    def StandardizeInterfaceDescription(self, request, context):
        standard_description = request.description.upper()
        return interface_pb2.Interface(
            name=request.name, description=standard_description, speed=request.speed
        )


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=2))
    interface_pb2_grpc.add_InterfaceServiceServicer_to_server(InterfaceGrpc(), server)
    server.add_insecure_port("[::]:5001")
    print("Starting gRPC Server")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()

Let’s take a quick look at this file. At the top, we import a few things. First, from the concurrent library, we import futures, which allows us to asynchronously execute callables. We also import the grpc library. Lastly, we import the two files we generated earlier.

At the bottom of the file, our entry point is the serve() function, which does four main things:

  1. Creates a gRPC server from the grpc library
  2. Registers our InterfaceService with the newly created gRPC server
  3. Adds a port that the gRPC server will listen on
  4. Starts the server and blocks until it terminates (a variant with graceful shutdown is sketched below)
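
As an optional tweak (a sketch, using the same imports as grpc_server.py), you can catch Ctrl-C and shut the server down gracefully instead of letting the process die mid-request; the 5-second grace period is an arbitrary choice:

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=2))
    interface_pb2_grpc.add_InterfaceServiceServicer_to_server(InterfaceGrpc(), server)
    server.add_insecure_port("[::]:5001")
    print("Starting gRPC Server")
    server.start()
    try:
        server.wait_for_termination()
    except KeyboardInterrupt:
        # Give in-flight RPCs up to 5 seconds to finish before exiting.
        server.stop(grace=5)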

Near the top of the file, we extend the InterfaceServiceServicer class from our interface_pb2_grpc file. Within this class is where we define the logic for the methods we created stubs for in our .proto file. Methods in this class take two required arguments other than self:

  • request – The Interface message sent into our gRPC method.
  • context – A grpc.ServicerContext that gRPC passes to every handler; we don’t use it here, but it exposes RPC-level details and controls such as deadlines and status codes.

The next few lines take the data passed in via the request argument, capitalize the description, create a new Interface message, and send it back to the client. That’s the gRPC server code. Let’s quickly put together a new file for our gRPC client.

Creating the gRPC Client

Creating the gRPC client is pretty straightforward. Copy and paste the below code snippet into a file called grpc_client.py.

import grpc
import interface_pb2
import interface_pb2_grpc


def client():
    channel = grpc.insecure_channel("localhost:5001")
    grpc_stub = interface_pb2_grpc.InterfaceServiceStub(channel)
    interface = interface_pb2.Interface(
        name="GigabitEthernet0/1", description="Port to DMZ firewall", speed=20
    )
    retrieved_interface = grpc_stub.StandardizeInterfaceDescription(interface)
    print(retrieved_interface)


if __name__ == "__main__":
    client()

The client() function does the following:

  1. Creates an insecure channel to a gRPC server running on localhost, port 5001.
  2. Using the channel created in step 1, creates a stub for the InterfaceService service we defined in our .proto file.
  3. Creates an Interface message object to pass into the gRPC call.
  4. Calls the StandardizeInterfaceDescription method, passing in the interface object from step 3 (a more defensive version of this call is sketched after this list).
  5. Lastly, prints out what was received from the gRPC call.
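
Here is that more defensive version of the call from step 4: a sketch that adds a deadline and basic error handling, using the same names as grpc_client.py. The 5-second timeout is an arbitrary choice.

try:
    retrieved_interface = grpc_stub.StandardizeInterfaceDescription(
        interface, timeout=5  # abort the RPC if no response within 5 seconds
    )
    print(retrieved_interface)
except grpc.RpcError as err:
    # err.code() is a grpc.StatusCode, e.g., UNAVAILABLE if the server is down.
    print(f"RPC failed: {err.code()}")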

Running the Client and Server Together

Now, let’s run the client and server together so we can see this code in action! First, open a terminal in the directory where grpc_server.py lives and run python3 grpc_server.py. The command should not return you to a prompt; instead, you should see the line Starting gRPC Server.

Open another terminal in the same location and run python3 grpc_client.py.

~/repo/Sandbox/blog_grpc ❯ python3 grpc_client.py

name: "GigabitEthernet0/1"
description: "PORT TO DMZ FIREWALL"
speed: 20

You can run the client side over and over again. The server will stay up and respond to however many requests it receives.

If everything was successful, you will get the response shown above. As you can see, it capitalized our entire description just like we wanted. I would definitely suggest playing around with this simple implementation of a gRPC server and client. You can even start to include non-unary methods in your gRPC server to explore further.


Conclusion

In this blog post, we updated our existing .proto definition file with our InterfaceService and added a StandardizeInterfaceDescription method within that service. We also used the grpcio-tools library to generate the code needed to create our own gRPC server. Lastly, we created a small gRPC client to show our gRPC server in action. Hopefully, you now have a deeper understanding of what gRPC is and how it works. Initially, I wanted to explore gRPC in the networking world in this blog post. However, I thought it important to continue to look at gRPC a little more in-depth first. In Part 3 of this series, I will discuss where gRPC stands within the networking world. We will review the more established names, such as Cisco, Arista, and Juniper, and look at how they are using gRPC and how they are enabling network engineers to use gRPC for their automation.

-Adam



What Is gRPC? – Part 1

Gathering information programmatically is a core component of network automation. We constantly have the need to get data from devices, metric appliances, automation orchestration servers, etc. We, more often than not, use REST (representational state transfer) APIs (application programming interfaces) to gather the information we want. In the first blog post of this series, I want to introduce gRPC. I will discuss what gRPC is at a high level and dive deeper into one of its main components, Protocol Buffers.

What Is gRPC?

Let’s break down gRPC into some of its main components so we can see what gRPC is doing under the hood.

Remote Procedure Call (RPC)

To begin to understand gRPC, we need to ignore the ‘g’ for a moment and quickly go over what RPC is. RPC allows a program hosted on one machine to call a subroutine (i.e., a function) on a remote machine without the caller needing to know that the call is remote. RPC uses a client-server model in which the server hosts and executes a particular subroutine and the client calls it. Its main use is in distributed systems, where it can make sense to host a widely used subroutine on an RPC server so that many RPC clients can call it.

Remote Procedure Call
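
To make the idea concrete before we add the g, here is a minimal sketch of plain RPC using Python’s built-in xmlrpc library; the function name and port are arbitrary choices:

# rpc_server.py: hosts a subroutine that remote clients can call
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add)
server.serve_forever()

# rpc_client.py: calls add() almost as if it were a local function
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.add(2, 3))  # executes on the server, prints 5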

gRPC

So, what is gRPC? gRPC is an open-source, high-performance Remote Procedure Call (RPC) framework that brings RPC into the modern world. It was released by Google in 2016. (The g surprisingly does not stand for Google; the meaning of the g changes with every release.) Some key features of gRPC include:

  • Implementation of Protocol Buffers as the IDL (Interface Definition Language), which allows gRPC to be language agnostic
  • Use of HTTP/2
  • Bidirectional streaming and flow control
  • Authentication

Protocol Buffers

Now that we have an understanding of what gRPC is, let’s dive into Protocol Buffers. Protocol Buffers are an open-source, cross-platform data format used to serialize structured data. Protocol Buffers were developed by Google and, as mentioned above, are tightly coupled with gRPC. How exactly does gRPC use Protocol Buffers? Protocol Buffers serve as the Interface Definition Language (IDL) for gRPC. The IDL defines both the services a gRPC server provides and the structure of the payload messages. For the remainder of this blog post, I want to go through creating and using a Protocol Buffers file so that in the next blog, when we use it with gRPC, there is a better understanding of what is taking place.

Defining a “.proto” File

Protocol Buffers (or protobufs for short) use “.proto” files to describe the data structure you wish to serialize. Once you have your .proto file created, you can use the Protocol Buffers compiler to easily create code in a number of languages. Let’s pretend we want to be able to serialize a data structure that modeled a switch interface. Our data model will require the interface name but optionally take the interface description and speed. Create an interface.proto file in a directory of your choosing with the file contents shown below.

Protocol Buffer syntax is very similar to C++. However, it is its own syntax. Information can be found here.

syntax = "proto2";

package tutorial;

message Interface {
  required string name = 1;
  optional string description = 2;
  optional int32 speed = 3;
}

Let’s dive into this short .proto file. The first line says which version of protobuf we are using. In this case, it’s version 2. The next line defines a package namespace. You can disregard this line when using Protocol Buffers with Python. If you are using C++, Java, or Go, you can read how the package specifier interacts with those languages here.

The next line is the start of our message. A “message” in protobuf is nothing more than an aggregate containing typed fields. Within our Interface message, three fields are defined: two strings, one for the interface name and another for the interface description, and one 32-bit integer for the speed. The “ = 1”, “ = 2”, and “ = 3” at the end of each line assign the unique “tag” each field will use in the binary encoding. The details of that process don’t need to be understood to utilize protobufs, but in-depth documentation can be found here.

Compiling the “.proto” File

Now that we have our .proto file, we can use the Protocol Buffer compiler to create Python source code. First, make sure you have the Protocol Buffer compiler installed. Next, in the same location as your interface.proto file, run the command protoc --python_out=. interface.proto. This will generate an interface_pb2.py file. For the sake of brevity, I will not post the contents of that file here, but definitely take a brief look at it.

Using the Python Source Code

Now that we have a .py file, let’s look at what we can do with it. Start a Python shell and run through the commands below.

>>> import interface_pb2
>>> sw_interface = interface_pb2.Interface()
>>> sw_interface.name = "GigabitEthernet0/1"
>>> sw_interface.description = "SDWAN Interface"
>>> sw_interface.name
'GigabitEthernet0/1'
>>> sw_interface.description
'SDWAN Interface'

So far, it doesn’t seem like anything too special. However, our Interface class isn’t a generic Python object. Because it was generated from a .proto file, it came with some extras! First off, you can’t add data to a field not defined in the .proto file.

>>> sw_interface.duplex = "full"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Interface' object has no attribute 'duplex'

You also can’t assign an incorrect type.

>>> sw_interface.speed = "1000"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: '1000' has type str, but expected one of: int, long

Notice how I’m using a string here rather than an integer.

If we run a dir() on our object, we can see some other helper methods.

>>> dir(sw_interface)
['ByteSize', 'Clear', 'ClearExtension', 'ClearField', 'CopyFrom', 'DESCRIPTOR', 'DiscardUnknownFields', 'Extensions', 'FindInitializationErrors', 'FromString', 'HasExtension', 'HasField', 'IsInitialized', 'ListFields', 'MergeFrom', 'MergeFromString', 'ParseFromString', 'RegisterExtension', 'SerializePartialToString', 'SerializeToString', 'SetInParent', 'UnknownFields', 'WhichOneof', '_CheckCalledFromGeneratedFile', '_SetListener', '__class__', '__deepcopy__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', '__unicode__', '_extensions_by_name', '_extensions_by_number', 'description', 'name', 'speed']

You can see we get quite a few helper methods that we didn’t code ourselves. The main one I’d like to draw attention to is SerializeToString. Let’s run it on our object and see what we get.

>>> sw_interface.SerializeToString()
b'\n\x12GigabitEthernet0/1\x12\x0fSDWAN Interface'

This function is important. The byte string produced by the SerializeToString method is what is passed over the network when we start looking at how Protocol Buffers are used in gRPC.
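
If you are curious why the byte string looks the way it does, here is a rough breakdown based on the protobuf wire format, along with the round trip a receiver would perform:

raw = sw_interface.SerializeToString()
# raw == b'\n\x12GigabitEthernet0/1\x12\x0fSDWAN Interface'
#
# b'\n'   is 0x0A, i.e., (field 1 << 3) | wire type 2 (length-delimited)
# b'\x12' is 18, the length of "GigabitEthernet0/1"
# b'\x12' is 0x12, i.e., (field 2 << 3) | wire type 2
# b'\x0f' is 15, the length of "SDWAN Interface"

# The receiving side rebuilds the message with ParseFromString:
received = interface_pb2.Interface()
received.ParseFromString(raw)
assert received == sw_interface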


Conclusion

When I originally thought to do a blog on gRPC, I thought I’d be able to fit it all in one blog post. As you can see, touching on only one of the major parts of gRPC, Protocol Buffers, took some time. Hopefully, you now have a high-level understanding of what both RPC and gRPC are as well as a deeper understanding of what Protocol Buffers are. In the next blog post I will show how to create a very basic gRPC server and client in Python and some examples of using the gRPC client on a Cisco NX-OS device. Thanks for reading!

-Adam



Software Testing Types


Testing your code is a key component of the Software Development Lifecycle. If you are like me and came from a networking background, you may not be aware of the sheer number of types of tests that exist. In this blog post I will give a high-level overview and some key characteristics of the most common types of tests that we use, are introducing, or plan to use in the Nautobot ecosystem.

Unit Tests

Unit tests are by far the most commonly implemented. Unit tests test a specific section or part of code. More often than not, the “part” of code is a function. One example would be creating a test to ensure that your function that converts MAC addresses from the format aaaa.bbbb.cccc to the format aa:aa:bb:bb:cc:cc works as intended.

A great example of that exact unit test can be found in my prior blog here.
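
As a sketch, a unit test along those lines written with Python’s built-in unittest library might look like this; mac_utils and convert_mac are hypothetical names for illustration:

import unittest

from mac_utils import convert_mac  # hypothetical module and function

class TestConvertMac(unittest.TestCase):
    def test_converts_dotted_to_colon_format(self):
        self.assertEqual(convert_mac("aaaa.bbbb.cccc"), "aa:aa:bb:bb:cc:cc")

    def test_rejects_invalid_input(self):
        # Assumption: the function raises ValueError on malformed input.
        with self.assertRaises(ValueError):
            convert_mac("not-a-mac")

if __name__ == "__main__":
    unittest.main()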

Some characteristics of unit tests include:

  • Quickest of tests to run – Unit tests should be written so that they take very little time to run.
  • Provide specific feedback – Because unit tests test a small section of code, feedback is typically very precise.
  • Easy to write – Out of the many types of tests, unit tests are often the easiest to write because they deal with a small section of code.
  • Do not interact with dependencies – Unit tests should test only the piece of code they are focused on. They should not interact with a web server, database, etc.
  • Should be able to run simultaneously – Because unit tests have no real dependencies, they can, and should, be run in parallel.

Real-world unit tests can be found in the pyntc repository. Note the use of both mock and patch to ensure that these tests do not have any dependencies.
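
Here is a small sketch of that pattern using unittest.mock.patch; device_module and its functions are hypothetical stand-ins for code that would normally SSH into a switch:

import unittest
from unittest.mock import patch

import device_module  # hypothetical module that talks to a real switch

class TestGetHostname(unittest.TestCase):
    @patch("device_module.run_ssh_command")  # hypothetical helper being replaced
    def test_get_hostname(self, mock_ssh):
        # The mock stands in for the SSH call, so no real device is needed.
        mock_ssh.return_value = "hostname nyc-sw01"
        self.assertEqual(device_module.get_hostname(), "nyc-sw01")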

Integration Tests

Integration tests are also very common. As the name suggests, the main purpose of integration tests is to test the integration between separate modules of a given application or program. An example of an integration test can be found in Nautobot here. This function tests the integration between the web UI and the back end to ensure that when someone logs in, the log-in is successful. Another example, more related to the network world, would be if the tests found here in pyntc used an actual device rather than a “mock”. You could then call these integration tests, since they have a dependency (the switch) and rely on it for their tests.

Some characteristics of integration tests include the following:

  • Typically use the real dependency – Integration tests more often than not test using an actual dependency, e.g., database, switch, web server, etc.
  • Difficult to write – Compared to unit tests, integration tests can be much harder to write, as you now have to account for interactions between modules.
  • Can be time-consuming – Because integration tests typically use real dependencies, the tests take longer to run. You may have to wait for an API call to return data or for an HTTP server to start.
  • May not be able to be run in parallel – Because integration tests often depend on other modules or code, they are typically run in succession rather than in parallel.

Regression Tests

Regression testing is more of a methodology than a specific test encompassing a particular part of a program or application. The idea is to test all parts of your code whenever a change is made, regardless of whether the change affected that part of the code. Because regression testing is more a methodology than a test of a particular piece of code, both the aforementioned unit and integration tests can be considered regression tests to some extent. Let me give you an example. I recently opened a pull request to add an “Export” button to the Nautobot ChatOps project. When I created that pull request, the CI/CD pipeline ran through all of the existing unit and integration tests to ensure that the functionality of the plugin was not broken by the code I added. I also needed to add tests for the code I added, which could later be considered regression tests for the next person who wants to add a feature to the plugin.

Some characteristics of regression testing include:

  • Time-consuming – Regression testing typically means running the whole test suite even when only a small part of code may have changed.
  • Repetitive task – The same tests need to be run over and over again whenever changes to the code are made.
  • New tests for code changes – As new features or bug fixes are introduced into a project, tests need to be created to account for that.

Load Tests

The purpose of load tests is to ensure that your application can handle the number of users, connections, and interactions it will receive in a production environment. While there are currently no official load tests in the Nautobot repo, we do plan on adding them using the Python library Locust. An example test might have 100 concurrent users hit the landing page of Nautobot to see how it handles the load. With that load test, we could look at page load times and how long any interactions with the database took. If we increased those 100 users to 1,000, we could run our test again and see how Nautobot handles that.
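
A Locust test along those lines might look like the sketch below; the wait time and URL path are assumptions:

from locust import HttpUser, task, between

class NautobotUser(HttpUser):
    # Each simulated user waits 1 to 5 seconds between requests.
    wait_time = between(1, 5)

    @task
    def landing_page(self):
        # Locust records response times and failures for this request.
        self.client.get("/")

You would then point Locust at a running instance and ramp from 100 to 1,000 simulated users via its web UI or command line.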

Some characteristics of load testing include:

  • Can be, but are not necessarily, stress tests – Stress testing is typically done with the intent to reach a point of failure. Load tests can result in a failure, but that is not the goal.
  • Can be difficult to account for all types of configurations – Customer X may run your application on a 2-core processor, and customer Y may run it on an 8-core processor. When load testing, you need to account for the different hardware, software, and security configurations of a given machine.

User Acceptance Tests

User acceptance tests are some of the last tests performed on an application. They differ from the aforementioned tests because, while the previous tests can easily be done programmatically, user acceptance tests take more work to automate. The goal of these tests is to ensure that the created software meets the goals of the customer/end user who will be using the application. Many times there can be a disconnect between what the developer creates and what the end user needs. Here at NTC we definitely take advantage of user acceptance tests quite often. If we are working in a professional services agreement, we are always getting feedback from the customer. If we are developing an open-source plugin, many hands here at NTC touch it and give feedback before it is released. While automating user acceptance tests can be more difficult than unit tests, one great library that provides a good framework is Selenium. It provides a programmatic way to interact with web browsers. This allows us to create reproducible and traceable tests to ensure we are meeting the customers’ needs.
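
For example, a scripted “a user can log in” check with Selenium might look like this sketch; the URL, element locators, and credentials are placeholders:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8080/login/")
    driver.find_element(By.NAME, "username").send_keys("demo")
    driver.find_element(By.NAME, "password").send_keys("demo-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # A crude success check; a real test would assert on something sturdier.
    assert "Log out" in driver.page_source
finally:
    driver.quit()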

When embarking on a user acceptance test journey, you may want to keep these things in mind as guidelines for developing good tests.

  • Define the scope – What exact features are you testing?
  • Constraints and assumptions – Before starting the tests, what are some assumptions and constraints? For example, are we only able to test on Windows 11 and not 10? Or maybe we can test only on a Linux system and not Windows.
  • Risks – This can include things such as incomplete testing environments and components.
  • Roles and responsibilities – Ideally you have multiple people doing user acceptance tests. You need to define what group (or individual) does what tests.
  • Create the script for your tests – Define each step a user will take for a given test and document it properly.

Conclusion

Software testing is a huge subject. I’ve only briefly introduced you to some of the tests that exist out there. Hopefully this has intrigued you enough to take a little bit of time and do some research on the many other types that exist.

-Adam

