Sometimes you need to use a filename in your code. While hard-coding a filename is easy, it can make long-term maintenance of the code more challenging.
One use case, though certainly not the only one, is writing to a log file for debugging purposes. In this case you may want to easily associate the name of the log file with the name of the originating file.
There are a few ways you can get the name of the file.
__FILE__
__FILE__ is a special token that returns the name of the file in which it appears. It works a lot like the __FILE__ macro in the C preprocessor: it substitutes the name of the current file.
As an example, let's assume we have a Perl script named file.pl. Inside this script we call a subroutine called logit, which lives in a module called file.pm.
In file.pm, the logit subroutine performs a simple print of __FILE__.
$ perl file.pl
file.pl # <- __FILE__ when called from file.pl
file.pm # <- __FILE__ when called from file.pm
$
Pros
The token will automatically update if you change the name of the file.
It returns the name of the file where it occurs.
Cons
If you place it within a subroutine of a Perl module, you will get the name of the .pm file, not the file from which you call the subroutine. (I suppose this could equally be a pro.)
Variable $0
$0 is a special variable in Perl that contains the name of the program being executed. If we take the same example as above but modify both prints to use $0, we can see how this differs.
$ perl file.pl
file.pl # <- $0 when called from file.pl
file.pl # <- $0 when called from file.pm
$
The output from the Perl script and the module is now the same.
Pros
The value automatically reflects the name of the command being executed.
It returns the command being executed regardless of the file in which it appears.
Cons
If you place it within a subroutine of a Perl module, you will get the name of the original command executed, not the file where you call the subroutine. (I suppose this could equally be a pro.)
When using HttpClient.SendAsync() to access a resource that might take a while to process, it can be advantageous to detect when the client has disconnected.
This can be done by passing a CancellationToken to the HttpClient.SendAsync() method.
public class HttpRequestHandler : IHttpRequestHandler
{
    private readonly CancellationToken _cancel;

    public CancellationToken CancellationToken => _cancel;

    public HttpRequestHandler(IHttpContextAccessor httpContext)
    {
        _cancel = httpContext?.HttpContext?.RequestAborted ?? CancellationToken.None;
    }

    /// <summary>
    /// Execute request and return response.
    /// </summary>
    public virtual async Task<IResponse<T>> ExecuteAsync<T>(IRequest request) where T : class, new()
    {
        // ...
        var response = await this.GetClient(request)
            .SendAsync(request1, completionOption, CancellationToken)
            .ConfigureAwait(false);
        // ...
    }
}
Anytime a connection to a long-running process is made, there is a risk that the connection will be terminated before the process completes. A CancellationToken is used to notify a process that a request should be cancelled.
As part of the connection to an ASP.NET Core controller there is the HttpContext, which provides access to such a token via its RequestAborted property.
Those that use Jenzabar CX and JICS for their student information system (SIS) know that the JICS system communicates heavily with the CX system to perform a variety of functions. This is done through what are referred to as web services. In reality they are just a series of Perl and Java programs that receive information from the JICS server and return information back to it.
What if you want to write your own web service that you can utilize from JICS or some other external system? Building a basic Perl web service is simple and can make accessing information in the CX database pretty easy.
This post will show you how to build a simple echo web service that will echo back some information about your connection.
Before we get started we need to make sure we have all the tools needed to build and test our echo web service. The good news is that they are all tools you should already have.
Tools
There are a few tools you will want to have to help implement and test your echo web service.
PuTTY
Favorite editor
Postman
I use PuTTY to access my CX environment. As long as you have access to a CX environment where you can build and install your Perl web service, you are ready to go.
You will need an editor; whatever your favorite is will do. An IDE like Visual Studio Code is nice because it does syntax highlighting and can help you be more productive, but vim or emacs will work too, as long as wherever you create your file allows you to save it to the CX environment.
I also use Postman to test my service. Postman is a nice tool because it allows you to quickly and easily send requests and repeat them. This isn't a necessary tool, but it sure does help. You could easily use curl or a browser to test as well.
With all of the tools in place, we can now create a basic skeleton web service.
Building a skeleton
The Jenzabar CX environment uses make to control a lot of the installation. When we build our web service we want to make sure that we adhere to some basic practices to make integrating our custom web service easier. This starts with the directory structure.
Location
All of the source code for web services on the system is housed under the web/cgi directory structure. Under this location there are a variety of sub-directories that help organize the various web services into logical places. Since we are building a custom web service we are going to place it in the web/cgi/services/custom directory, but you can place your web service under any directory that works for you.
Create a new file
The first step in the process is to create a new file. To ensure that the file is integrated into the existing structure, we execute the following commands.
$ cd web/cgi/services/custom
/opt/cars/pomona/web/cgi/services/custom
Directory: /opt/cars/pomona/web/cgi/services/custom
$ make add F=echo.pl
>>Command: add
>echo.pl - empty file.
Updating '.makevar.mak'
$
We now have an empty file called echo.pl in the current directory. This file has been added to the system to enable management via the Jenzabar make scripts. This includes source control and installation.
The shell
The first line of the Perl file needs to tell the system where to find the Perl executable. Normally you would add something like #!/usr/local/bin/perl and be done. However, in the CX world you don't do this. Instead you let the CX make scripts figure out where the Perl executable is.
Open the echo.pl file in your editor and add #!PERL_SHELL as the first line of the file.
Version tracking
All of the code that is managed by the make scripts uses the Revision Control System (RCS) to maintain versioning information. We want to stick with Jenzabar standards and maintain this code in RCS as well. We are going to add a basic RCS header to the file to allow this information to be included in the file.
#Revision Information (Automatically maintained by 'make' - DON'T CHANGE)
#-------------------------------------------------------------------------
#$Header$
#-------------------------------------------------------------------------
The key line here is the #$Header$. When the code is checked in and out of RCS it will expand the Header keyword to include relevant information.
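For example, after a check-in the expanded header will look something like the line below. The path, revision number, date, and user shown here are placeholders; RCS fills in the real values for your file.

```
#$Header: /opt/cars/pomona/web/cgi/services/custom/RCS/echo.pl,v 1.1 2024/05/01 14:22:08 youruser Exp $
```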
Common Libraries
In order to leverage existing code we need to include some Perl modules. These are common modules that you will use whenever creating a Perl web service.
use lib 'WEBSRV_LIBPATH';
use CX::Service;
The WEBSRV_LIBPATH is a special token. Like PERL_SHELL, the make scripts will translate it to the relevant path at install time. CX::Service is a standard module provided by Jenzabar that provides common functionality for working with a web service on CX.
Exiting
Once we have the basic libraries included, we need a way to get information back to the system that is calling the web service. This is done using some routines built into the CX::Service module.
To utilize the CX::Service we need to call the new method. This will create a reference to a CX::Service object for us to use in our code.
Once we have a reference we can return information to the system and exit. The first thing we need to do is set an exit code for JICS to read. This is done by calling the jics_code subroutine on our CX::Service reference.
This sets a header for JICS to inspect when it gets the response. It is important to do this first, or you will not be able to set it properly.
Now that we have set the return code, we print the results of our service by calling the print_results() subroutine in CX::Service.
Finally we exit the script.
my $srv = new CX::Service;
$srv->jics_code(1);
$srv->print_results();
exit(0);
1;
We now have a very basic skeleton that will receive our request and return a response.
Test Skeleton
Now that we have a very basic skeleton we want to make sure that everything is working properly.
Install service
Before we can call the service we will need to install it using the standard make scripts. This can be done with make tinstall F=echo.pl.
Postman Test
Now that we have the service installed we can test it in Postman. Put the URL of your echo.pl service into the address line of Postman and click Send. The response will return a 200 OK. If you get a 404 Not Found response, then the service was not installed correctly; you may need to add WEBSERVER=LIVE to your make tinstall command.
If we inspect the response in Postman we can see that the body doesn't contain anything. Inspecting the headers reveals that the Content-Length is zero. We can also see a Jics-result-code header with a value of 1. This is the header that is set when you call the $srv->jics_code(1) in your service.
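The raw response, with other headers omitted, looks roughly like this:

```
HTTP/1.1 200 OK
Content-Length: 0
Jics-result-code: 1
```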
Doing Something
We now have a very basic skeleton web service that will run and return a status code. This isn't very useful for any purpose other than verifying that you are able to call the web service.
Echo the JICS User
Often we need to know the JICS user so that the web service can take an appropriate action. To determine the user we can use a subroutine from our existing CX::Service reference.
Open echo.pl and add the following lines.
my $srv = new CX::Service;
my $jics_user = $srv->get_jics_user;
$srv->push_buffer($jics_user . "\n");
$srv->jics_code(1);
This will return the value of the JICS user that the web service is operating on behalf of and write it to a buffer that will later be used to return to the caller.
Before we can test our changes we need to do one more thing: add a header to our request. Open your Postman request, add a header named JICS_USER, and assign a value of your choice to it.
If we now run the request again we will see that the value of the JICS_USER header is returned in the body of the response.
Other Useful Things
Some other basic elements of a request that we may want to be able to inspect or use are parameters sent as part of the request, the request method, a request header or the request URI. These can all be easily obtained using CX::Service methods.
Request Header
If we want the value of a specific request header we can obtain this by calling the http() method. This is actually a CGI.pm subroutine. The CX::Service module extends this for us and allows us to access this method from our existing $srv reference.
If we wanted to get the User-Agent header we call $srv->http('User-Agent');.
Request Method
If you want to know what type of request was made, you can inspect that as a standard part of the CGI request. Since this information is not part of a header, we need to look at an environment variable to get it.
We can determine the request method by reading the REQUEST_METHOD environment variable: $ENV{'REQUEST_METHOD'}.
Url Parameters
Getting the value of a URL parameter is just as easy. One nice feature is that we don't really need to worry about the type of request that was sent; whether it is a GET, POST, PUT, or any other type of request, the parameters are provided to us using the same mechanism.
The parameters can be accessed using the Vars() subroutine. $srv->Vars() will return all of the parameters in a single call and place them into a hash.
QUERY_STRING and REQUEST_URI
Similarly, if you want to inspect the query string or the request URI, you would use the standard mechanism for accessing those: $ENV{'QUERY_STRING'} and $ENV{'REQUEST_URI'} respectively.
Query String With POST
If you send a POST request, the Vars() method will obtain the parameters from the body of the POST request. If you need to pass a URL parameter in the query string of a POST request, you will not be able to get that information from this method. You could get the value of QUERY_STRING and parse out the value you want. This works, but is not necessary; instead you can call the $srv->url_param() method, which inspects the query string for you and extracts the value of the key you request.
Echo All The Info
Now that we have obtained several key pieces of information we can echo it back out. This is as simple as just adding it to the buffer using the push_buffer method. After pushing the various elements that we have obtained to the buffer we can make another request via Postman.
Our final echo.pl code looks something like this.
#!PERL_SHELL
#
#Revision Information (Automatically maintained by 'make' - DON'T CHANGE)
#-------------------------------------------------------------------------
#$Header$
#-------------------------------------------------------------------------
use lib 'WEBSRV_LIBPATH';
use CX::Service;
my $srv = new CX::Service;
my $jics_user = $srv->get_jics_user;
my $outBuffer = '';
$srv->push_buffer($ENV{'REQUEST_METHOD'});
$srv->push_buffer(" $ENV{REQUEST_URI} \n");
$srv->push_buffer($srv->http('User-Agent'). "\n");
$outBuffer .= $jics_user;
$srv->push_buffer($outBuffer . "\n\n");
my %args = $srv->Vars();
while (my ($key, $value) = each %args) {
    $srv->push_buffer("$key=$value\n");
}
$srv->push_buffer("param1=" . $srv->url_param('Param1') . "\n");
$srv->jics_code(1);
$srv->print_results();
exit(0);
1;
After installing this with our make tinstall command, we make the request via Postman and can see each piece of information returned in the body of the response.
Conclusion
You now have a basic Perl web service that you can use on your CX system.
When writing an ASP.NET Core application you often need to perform some operations in a development environment that you would not perform in production. This could be for debugging or performance purposes, or because you need to test or provide features in development that you do not want in the release or production environment.
The .NET Core environment makes it easy to determine the current environment and has some built-in environments that you can use. Two of these are Development and Production. As you can guess, the Development environment is for when you are running your code in a development setting. If the current environment cannot be determined, it defaults to Production.
When running an ASP.NET Core site in IIS you may want to set the environment to Development for a specific site. If you want the entire server to run in Development, you can add a system environment variable. The name of the variable to set is ASPNETCORE_ENVIRONMENT and you simply set the value to, you guessed it, Development.
Setting the variable for one particular site on your IIS to run in this mode is easier and less disruptive.
Find Your Site
Start by opening the IIS Manager. Then select the site for which you want to modify the environment variable. This will bring up the panel of options on the right which includes the "Configuration Editor" under "Management" section. Click on this to open it.
Once open select the system.webServer/aspNetCore section and then select the environmentVariables item from the list. On the right will be a small button with an ellipsis, click on that to open an additional window called the Collection Editor for editing the variables.
Add the variable
In the new Collection Editor window click the "Add" button on the right side under "Actions". This will add a new environment variable to be set. In the "Properties" section there is a "name" and a "value" row. In the "name" row set the name of the variable to ASPNETCORE_ENVIRONMENT. For the value, set it to Development. You can set the variable to any value; however, the .NET Core environment will only recognize a few pre-defined values. If you choose a non-standard value, you will need to check for it specifically in your code.
After setting the name and value close the Collection Editor window. Now click the Apply button under the "Actions" section to apply your changes.
Restart The Site
When making a change to the environment variable for the site you are really just putting an entry into the local web.config. When you save the changes to the web.config this triggers IIS to reload the web.config. This means that you do not need to do anything special to get the new variable to take effect.
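For reference, the entry the Configuration Editor writes into web.config looks something like the fragment below. The processPath and arguments values are illustrative and will match your own deployment.

```xml
<configuration>
  <system.webServer>
    <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false">
      <environmentVariables>
        <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Development" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>
```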
If for some reason you are not seeing the results you expect you can restart the Application Pool that the site is in to trigger the reload.