Why Pluralsight is important for my career

 

Recently my development team was approved to receive Pluralsight subscriptions. I’ve been an advocate of Pluralsight’s training since joining the team, and I am pleased to see the company support my coworkers’ training and education.

Having been a subscriber of Pluralsight for several years, I was asked to give a demo of how to use the site.

Before giving the demo and showing people how to use Pluralsight effectively, I took the opportunity to give a short talk to my coworkers about career development and why Pluralsight is important to mine.

 

Why Pluralsight is important for my career

First, let me give big kudos to Karth and Mike for working to make Pluralsight subscriptions available to our team.

For a few minutes I’m going to talk about what career development means to me and about my own career, which I hope will give insight into how I see Pluralsight relating to career development, and how Pluralsight can help you in your career and job performance.

As many of you know, I’ve been a proponent of online training, and of Pluralsight in particular. I’ve subscribed to Pluralsight for three years, so, as one might say, I put my money where my mouth is. I have subscribed (and still do) to other online training sites, but, hands down, Pluralsight is the best in its class.

One of the foundations of success in your job and in your career—note that I differentiate the two; your job is not necessarily your career—is having good skill sets.

Pluralsight is about building your career and helping you become successful at your job.

So why have I paid for my own subscription the past three years? What makes Pluralsight valuable for me? As you come to understand why I find Pluralsight valuable, I hope some of the lessons I’ve learned will help you consider how Pluralsight can help you.

To answer that question of value, let me say that I have a perspective on careers which likely differs from most of yours.

I’ve now reached the 40-year milestone of writing software. I really did get started writing software using punch cards. I started on the university mainframe, writing Fortran IV programs for electrical engineering projects.

In my career, I’ve had many jobs: from writing PL/I, APL, and System/370 assembler at IBM, to assembler on a variety of microprocessors at NCR, to OS work on DOS, OS/2, Windows, and Unix, to large framework development at NCR and Microsoft.

What have I learned in 40 years about career development?

One thing is: Your employer owns your job, you own your career.

Your career is your passion, what motivates you to get up in the morning, what you find interesting to work on, what roles you like to be in. Your job is the assignments given by management and business to get a product or project developed. When your career interests align with your job assignments, life is good. When they don’t, life sucks.

YOU usually initiate and follow a career change. Your employer initiates and directs job changes.

When I think about career and job, I ponder these three questions for myself:

Who are you?
What do you want?
Where are you going?

I hated my first job at IBM. As the project progressed, I realized I did not like the location, the work environment, or the company culture, and I felt my job assignments were out of alignment with where the industry was moving and where my interests lay. I needed to make a career change.

I quit IBM to join NCR in the Point of Sale division. The jobs and projects at NCR aligned with my career interests and goals: operating system work, communications protocol, distributed computing. I enjoyed the travel and the team-lead responsibilities.

But remember, the company owns your job, and there may come a day when:

The avalanche has already started- it is too late for the pebbles to vote.

What does that mean? After a decade at NCR, management three levels above me decided to close the facility and relocate to Atlanta, GA.

Because of my skill sets and past work, I was offered a choice: relocate to Columbia, SC, to a newly formed office products division; relocate to Atlanta, GA, to the POS division; or go my own way and be sucked into the economic black hole that drained the lifeblood from the community as the small town’s major employer closed operations.

My career choice was to do different work on different operating systems and application development. That choice opened job opportunities I would never have had if I had stayed in the POS division. And those job opportunities opened new career choices in later years.

At the end of the ’90s, the dot-com boom was in full swing, and NCR was going in directions I no longer cared to follow. My passion was Windows. NCR’s interests lay in entering the customer relationship management market, developing software in Java and Unix. NCR had many stumbles and experienced much turmoil.

NCR job assignments and business directions were totally mismatched with my career goals.

After 20 years with NCR, I jumped and joined a dot-com startup in Columbia, developing eCommerce web sites and doing performance work.

But, in two years: The avalanche has already started- it is too late for the pebbles to vote.

The dot-com boom was turning into the dot-com bust, and the startup burned through its venture capital. I came into work one morning and was told, “Dan, you can’t work here anymore.” Three months later, the entire company collapsed.

But my prior career choices, the kind of work I was passionate about, and the skill sets acquired at the dot-com startup helped me land a job with Microsoft in Findlay.

But always remember: you own your career and make decisions to get into the positions you want; the company owns your job.

I loved my career and job in Microsoft. But, in 2004, The avalanche has already started- it is too late for the pebbles to vote.

Microsoft executives three levels above decided to consolidate operations in Redmond and close the Findlay facility.

My career goals were aligned with the job Microsoft was offering in relocating to Redmond. My personal and life goals were not. In this case, my personal and life goals overruled the career goals.

So what are some lessons I learned in 40 years?

Job skills matter.

Career is knowing what you want and acquiring the job skills for those goals.

Luck is sometimes described as being in the right place at the right time.

But luck is almost always coupled with having the correct skillset to fulfill your employer’s needs.

Sometimes the position a person wants is gotten through backdoor channels—of being friends with, or being related to, or by sleeping with someone higher in the organization making the hiring and promotion decisions. (I say this seriously because I’ve seen it happen several times.)

But, thankfully, most of the time “luck” is being at the right place at the right time, and having the right skill sets.

And if luck doesn’t happen for you, if you’ve maintained marketable skillsets, you can make your luck happen somewhere else.

As it’s been said, “If you can’t change your job, change your job.”

Early in my career, about everything I needed to know could be found in a few books: an assembler manual, The C Programming Language by Kernighan and Ritchie, and a few protocol specifications for bisync, TCP/IP, and X.25.

Today, the skill sets needed to do our jobs as software developers are much wider and numerous.

Want to party with Windows client technology? You’ll need skills in WinForms, WPF/XAML, WCF, and debugging tools.

Want to party with web technologies? Get skilled in JavaScript, libraries and frameworks such as Bootstrap and Angular, ASP.NET MVC, and, of course, HTML5.

Work with a database? The developer usually has the dual role of database administrator, making decisions about well-designed table schemas and stored procedures, and, of course, ensuring performance doesn’t suck. Oh, and you probably have to know something about ORMs.

Are you going to deploy to the cloud?

What are your security concerns? Encryption, Cross Site Scripting vulnerabilities, etc. etc.

Netscape founder and venture capitalist Marc Andreessen wrote in 2011:

“Software is steadily eating the world, disrupting and transforming industries like music, retail, and more. In 1999 there were only 50 million internet users, compared to 2 billion PC users and 5 billion phone-based internet users today.”

And keep in mind, Andreessen said that three years ago. Those figures have certainly increased significantly since.

So I pose the three questions to you again, in context for your career.

Who are you? Meaning, What are you passionate about? Development? Testing? Infrastructure? Client UI? Web UI? Server back end? Database? Security? Performance? Mobile computing? Gaming?

What do you want? Meaning, what do you want out of your career in the next 30 to 40 years?

An 8-to-5 job? Leadership and influence? Development? Project management? Architect? Well-known author or speaker? Influencer in your organization? Do you like to travel? Work in an R&D company or a “D” company, or be a contractor? Do you want to be a well-known consultant (like Michael Collier)?

Where are you going? Meaning, do you plan to stay in your job? What happens when your job changes due to technology, market, or economic changes—what would you like to do? Where do you want to rise in the organization? What kind of role do you want in the organization?

And, biased by my past experience:

When, not if, an avalanche starts, will you be in a position to cope or will you become buried?

I see several emerging trends in the software industry. Pluralsight helps me to be better at my current job, and it helps me learn more and be more prepared to work in a world changed by those evolving trends in the next five to ten years.

The big trends I see are

Cloud computing; mobile and tablet-based business computing; distributed computing (Moore’s law is hitting its limits, and we now increase scalability by scaling out to multiple servers, using enterprise software patterns in messaging, and implementing patterns like CQRS).

Security will play a role of increasing importance, as more security breaches happen, and more user data and money flow on the internet.

We are becoming a surveillance state. I don’t mean that in the NSA sense—that’s a different topic—I mean that companies are collecting more and more data about us.

For example, retailers are using motion sensors (similar to Kinect) in stores to track customer interest in items, seeing what items customers pick up, inspect, and put back on the shelf versus put in their shopping carts. Retailers want to know what you considered buying but didn’t. Companies are collecting data on purchasing patterns, and collecting data for marketing purposes on viewed TV shows (which is more easily done now that more people are moving to internet-delivered viewing rather than the over-the-air antenna viewing that predominated 30 years ago). Then there is social media usage and trends, and so on. And consider the commercial use of drones and automobiles, and the software needed to navigate them. Awesomeness for software developers.

As more and more data is collected, Big Data and data analytics will be in greater demand: skill sets in cloud computing with massive database server farms, technology stacks like Hadoop, NoSQL and document databases like MongoDB, and software patterns like MapReduce will increase in usage and demand.

What are your career goals? Each individual needs to reach into themselves and give thoughtful consideration on what they enjoy doing, how to be more effective in what they do, and to, perhaps, widen their minds in the possibilities of how their career can develop.

I like Pluralsight because it informs me, it broadens my perspective of the possible, and it inspires me.

If you only listen to the courses, you are shortchanging yourself. You become informed, to a degree. You can improve a job skill.

But, I encourage you to read the course comments to see what others have to say about the course, and often you’ll find additional resources related to the topic.

Try to learn what you can about some of the authors. Read their blogs to drill deeper into the topics they are expert in. Learn what you can about their career paths, to see possibilities you may want to pursue for yourself.

Knowing their career paths and seeing what they are doing in the industry shows what is possible. What you learn about how others have managed their careers may blow you away.

Several of the authors I know personally, having talked with them at user group events: CodeMash, CodeStock, regional Days of Dotnet, and StirTrek. Others I know through their work, going back to the early internet days, exchanging messages with them on the COM/DCOM mailing lists and taking their courses when they were instructors at a company called DevelopMentor.

It’s inspiring to see how people who were developers 25 years ago are now co-owners of successful startup companies. It’s inspiring to see how someone like Iris Classon, who was a dietician three years ago, is now a well-known speaker and blogger and has taken on high-visibility jobs at Telerik and GalaSoft.

David Starr, whom I met while attending his half-day session on Agile process development at CodeMash three years ago, has gone from doing training like that, to working on the tooling team in Microsoft’s DevDiv, to, as recently announced, becoming the Chief Operating Officer at Scrum.org.

Let me close this discussion with the following point:

Seldom in life do you get something for free.

As my father told me, and I told my son, you get 12 years of free education. The motivation of how you use it must come from within. What you learn needs to be in line with your passions, goals, and dreams. You’ll never get 12 years of free education again in your life. How you take advantage of that free education is up to you. No one can force you to study.

You can use that free education to the max to become the best you can be and help drive where you want to go in life. Or you can waste that free gift, and likely regret that decision for many years.

And such it is with Pluralsight.

You have been given a freebie.

Maybe your career path hasn’t been in your thoughts much. Maybe it doesn’t matter to you. But if it does, I encourage you to try out your subscription, explore the courses, and learn.

My career is important enough to me that I pay for my own subscription, buy books, attend conferences, and network with other software developers.

Hopefully, your career development is important enough for you to take advantage of a free subscription.

And hopefully I haven’t sounded too preachy or shouted too loudly from the soapbox. My goal was to give you pause for thoughtful consideration.

Many of you have 30 to 40 years ahead of you in your career. Give thoughtful consideration to what you like, where you want to go, and the possibilities. Think about how staying current and broadening your skills will help get you where you want to be.

And in doing that, our team will improve, better meet business goals, and be less likely to face some future avalanche event.

I’m 110% certain the software industry will transform in incredible ways in the next three to four decades—significantly more than it has in my career.

How will those transformations unfold?

I can speculate, but honestly I can’t say I care as much as you should care. My path will be different from yours.

In five to ten years I’ll be retired.   In thirty to forty years, I’ll be dead.

And with all that to think about, let’s move to exploring Pluralsight!!


Instrumenting Your Application to Measure its Performance Part 7 Injecting EventSource into WCF Service Calls

 

The last article demonstrated how to use Unity Interception to inject code to emit ETW traces before and after calling a method. Using this technique is a good way to separate tracing / logging concerns from a class method and avoids interlacing your main code with trace statements.

You can find the source code to this article in my GitHub repository  BizGear

In this article, following the philosophy of Aspect Oriented Programming, I’ll show you how to write custom WCF behaviors to inject ETW traces into WCF service calls. Similar to doing AOP using Unity Injection, using custom WCF behaviors separates the ETW tracing from your main code and avoids interlacing your code with trace statements.

There are additional benefits besides separation of concerns.

The first benefit: in an SOA architecture, it is good practice to generate ETW trace statements at system boundaries.

 

[Diagram: ETW trace points at the client and server system boundaries]

 

On the client, the boundary is the event handler for some UI action. At the server, the boundary is the WCF service call. It is here that the WCF subsystem calls into your service’s method and returns data to the client.

Of course, besides tracing at the boundaries, you may want additional tracing to measure the performance of business logic or CRUD operations on the database if the timings at the system boundaries suggest deeper investigation.

In a large system, it’s easy to start having so many trace statements generated that filtering and interpreting the data becomes difficult. By starting at system boundaries, you have fewer needles and a smaller haystack to search.

A second benefit is that WCF behaviors give you access to information that is very useful in measuring performance. Besides the time taken by a WCF call, you can easily get how much data is sent between the client and server, take a snapshot of memory (to watch whether memory usage is ever-increasing in your service host process), and obtain the identity of the caller accessing the WCF service call.


The WCF Service Stack

WCF has multiple extension points where you can insert custom code to modify runtime behavior. I use the following custom behaviors: Service, Operation, Parameter Inspector, and Message Inspector. These behaviors can be added programmatically or by adding attributes to the WCF service contract.

 

[Diagram: the WCF service stack and its extension points]

Figure 1

 

The service behavior, not shown in Figure 1, is used to apply Operation behaviors.

You use the operation behavior, which is associated with the WCF service call (the operation contract in WCF), to add a Parameter Inspector behavior.

The Parameter Inspector behavior is called before and after the WCF method executes. In the Parameter Inspector, you can obtain the identity of the user placing the call, access the parameters passed to the WCF method, and trace the WCF method’s execution time.

In the Message Inspector, you can get the size of the data messages transferred between the client and WCF service.

When you combine the traces done in the Parameter and Message Inspectors, you can quickly see:

  • Which clients are making calls to the service
  • How much time a method call takes
  • Memory utilization
  • How much data is exchanged between the client and server

As you collect this data over time, you can build a profile of your application via the WCF service calls. Profiling this ‘application metadata’, you can answer questions such as:

  • Who are the most frequent users of the service
  • Which service methods are most frequently used
  • Which service methods take the most time to complete
  • How much data is sent, by method and by user
  • What are the heaviest and lightest server loads:
    • the number of concurrent users,
    • the times of week or day of peak use.

With ETW tracing, you can combine your application’s ‘metadata’ with other information—CPU utilization, memory usage, garbage collection—to get a holistic picture of what your application is doing, how it’s being used, and its effects on the hosting operating system. You don’t need to capture or read the messages exchanged between the client and server. (A tip of the hat to Edward Snowden for pointing out the importance of metadata collection.)

With the profile data, you have good starting points to identify server side hot spots and determine where to drill deeper for more granular performance data.

For code reuse and flexibility in allowing behaviors to be applied programmatically or by attributes, I created a class for each of the behaviors, which derive from Attribute.

Given the above overview, let’s start examining some code.

 


Service Behavior

As stated earlier, there are different ways to apply behaviors to the WCF runtime. One approach is to programmatically add the behaviors when creating the service host; another is to decorate your service class with an attribute. A sketch of the programmatic approach appears below; the rest of this article applies the behavior via an attribute.
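The repository applies the behavior with an attribute, so the programmatic route is only sketched here. The hosting code below is hypothetical (a self-hosted console host, base address, and binding are assumed); only EtwLoggingServiceBehavior, ISalesService, SalesService, and Context.Sales come from the sample:

ProgrammaticHosting (sketch)
  1. using System;
  2. using System.ServiceModel;
  3.  
  4. class Program
  5. {
  6.     static void Main()
  7.     {
  8.         // Hypothetical base address; the BizGear host may be configured differently.
  9.         var baseAddress = new Uri("http://localhost:8000/BizGear");
  10.  
  11.         using (var host = new ServiceHost(typeof(SalesService), baseAddress))
  12.         {
  13.             host.AddServiceEndpoint(typeof(ISalesService), new BasicHttpBinding(), "SalesService");
  14.  
  15.             // Add the service behavior before opening the host; WCF then fans it
  16.             // out to every endpoint and operation in ApplyDispatchBehavior (Figure 4).
  17.             host.Description.Behaviors.Add(new EtwLoggingServiceBehavior(Context.Sales));
  18.  
  19.             host.Open();
  20.             Console.WriteLine("SalesService running. Press Enter to stop.");
  21.             Console.ReadLine();
  22.         }
  23.     }
  24. }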

The Sales Service interface

ISalesService
  1. [ServiceContract]
  2. public interface ISalesService
  3. {
  4.     [OperationContract]
  5.     //[EtwLoggingOperationBehavior(Context.Sales)]
  6.     IList<CustomerDto> GetAllCustomers();
  7. }

Figure 2

 

 

SalesService
  1. [EtwLoggingServiceBehavior(Context.Sales)]
  2. public class SalesService : ISalesService
  3. {
  4.     //private readonly IEtwLogger _etwLogger;
  5.     private readonly ILogStrategy _logStrategy;
  6.     private readonly ISalesRepository _salesRespository;
  7.  
  8.  
  9.     /* Default constructor needed when not using the Unit.Wcf library to inject
  10.      * dependent objects via IInstanceProvider
  11.      */
  12.  
  13.     public SalesService()
  14.     {
  15.         _logStrategy         = new SalesLogStrategy();
  16.         _salesRespository    = new SalesRepository();
  17.     }
  18.  
  19.     public IList<CustomerDto> GetAllCustomers()
  20.     {
  21.         return _salesRespository.GetAllCustomers();
  22.     }
  23. }

Figure 3

 

The attribute on line 1 of Figure 3 attaches the EtwLoggingServiceBehavior to SalesService.

 

EtwLoggingServiceBehavior
  1. [AttributeUsage(AttributeTargets.All, AllowMultiple = false)]
  2. public class EtwLoggingServiceBehavior : Attribute, IServiceBehavior
  3. {
  4.     private readonly Context _context;
  5.     private readonly IEtwLogger _etwLogger;
  6.  
  7.     public EtwLoggingServiceBehavior(Context context)
  8.     {
  9.         _context = context;
  10.  
  11.         _etwLogger = EtwLoggingFactory.CreateLogger(context);
  12.     }
  13.     
  14.     public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
  15.     {
  16.         foreach (var endpoint in serviceDescription.Endpoints)
  17.         {
  18.             endpoint.Behaviors.Add(new EtwLoggingEnpointBehavior(_etwLogger));
  19.  
  20.             foreach (var operation in endpoint.Contract.Operations)
  21.             {
  22.                 var bFound = operation.Behaviors.Any(opBehavior => opBehavior.GetType().Name == "EtwLoggingOperationBehavior");
  23.  
  24.                 if (bFound) continue;
  25.  
  26.                 var behavior = new EtwLoggingOperationBehavior(_context);
  27.                 operation.Behaviors.Add(behavior);
  28.             }
  29.         }
  30.     }

Figure 4

 

At runtime, the WCF subsystem constructs the EtwLoggingServiceBehavior, passing Context.Sales to its constructor (Figure 4, line 7). The constructor calls a class factory to get an instance of the ETW logger; the specific type of logger the factory returns depends on the context passed to it (line 11).

After constructing the service behavior, the WCF subsystem calls ApplyDispatchBehavior (lines 14–30).

ApplyDispatchBehavior iterates through the collection of endpoints (line 16), constructs a new endpoint behavior, and adds it to the endpoint’s behavior collection (line 18).

Then it constructs a new EtwLoggingOperationBehavior and adds it to each operation’s behavior collection (lines 20–28).

Because operations can be added to the service in different ways, such as by an attribute on a specific operation (Figure 2, line 5), a check is made to determine whether an EtwLoggingOperationBehavior is already in the operation’s behavior collection (lines 22–24).

As seen in Figure 2, line 6, there is one operation on this service: GetAllCustomers(). For each operation found, one and only one EtwLoggingOperationBehavior should be added to the operation’s behavior collection (line 27).

 




Endpoint Behavior

Use the endpoint behavior to add a Message Inspector into the WCF pipeline.

The endpoint behavior can be instantiated on both the client and server sides of the WCF stack. I won’t discuss it in detail here, but you may notice in the code sample that the client-side endpoint behavior creates a custom message inspector.

This is one way to create a ‘back channel’ of information from the client to the server, passing a unique identifier in the message. When the client uses ETW to log to a database and the server does the same, you can use the unique identifier to match the log messages made by the client and server. I’ll leave describing how to create the back channel on the client and read that data in the server’s WCF stack for a future article.

 

EtwLoggingEndpointBehavior
  1. public class EtwLoggingEnpointBehavior : IEndpointBehavior
  2.     {
  3.         private readonly IEtwLogger _etwLogger;
  4.         private readonly string _clientCookie;
  5.  
  6.         public EtwLoggingEnpointBehavior(IEtwLogger etwLogger)
  7.         {
  8.             _etwLogger = etwLogger;
  9.             _clientCookie = string.Empty;
  10.         }
  11.  
  12.         public EtwLoggingEnpointBehavior(IEtwLogger etwLogger, string clientCookie)
  13.         {
  14.             _etwLogger = etwLogger;
  15.             _clientCookie = clientCookie;
  16.         }
  17.         
  18.         public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
  19.         {
  20.             var customMessage = new CustomClientMessageInspector(_clientCookie);
  21.             clientRuntime.MessageInspectors.Add(customMessage);
  22.         }
  23.  
  24.         public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
  25.         {
  26.             var inspector = new EtwLoggingMessageInspector(_etwLogger);
  27.             endpointDispatcher.DispatchRuntime.MessageInspectors.Add(inspector);
  28.         }

Figure 5

 

The service behavior (Figure 4) gets the collection of endpoints for the service and adds an endpoint behavior to each endpoint (Figure 4, lines 16–18).

In ApplyDispatchBehavior (Figure 5, lines 24–28), the endpoint behavior gets the collection of message inspectors and adds an EtwLoggingMessageInspector (line 27).

 




Message Inspector Behavior

EtwLoggingMessageInspector
  1. public class EtwLoggingMessageInspector : IDispatchMessageInspector
  2.     {
  3.         private readonly IEtwLogger _etwLogger;
  4.  
  5.         public EtwLoggingMessageInspector(IEtwLogger etwLogger)
  6.         {
  7.             _etwLogger = etwLogger;
  8.         }
  9.  
  10.         public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
  11.         {
  12.             if (!_etwLogger.IsEnabled()) return null;
  13.  
  14.             var etwCookie = string.Empty;
  15.  
  16.             var headerIndex = request.Headers.FindHeader(CustomEtwLoggingHeader.Name,
  17.                                                          CustomEtwLoggingHeader.NameSpace, "");
  18.             if (headerIndex > -1)
  19.             {
  20.                 var content = request.Headers.GetHeader<XmlNode[]>(headerIndex);
  21.                 etwCookie = content[0].InnerText;
  22.             }
  23.             var msg = request.ToString();
  24.  
  25.             if (msg == null) return null;
  26.  
  27.             var totalMemory = GC.GetTotalMemory(false);
  28.  
  29.               var etwMsg = string.Format("EtwCookie: {0}MsgLen: {1} TotalMemory {2}", etwCookie, msg.Length, totalMemory);
  30.  
  31.             _etwLogger.Service("EtwLoggingMessageInspector", "AfterReceiveRequest", etwMsg);
  32.  
  33.             return null;
  34.         }
  35.  
  36.         public void BeforeSendReply(ref Message reply, object correlationState)
  37.         {
  38.             if (!_etwLogger.IsEnabled()) return;
  39.  
  40.             var msg = reply.ToString();
  41.  
  42.             if (msg == null) return;
  43.  
  44.             var msgLen = msg.Length;
  45.  
  46.             _etwLogger.Service("EtwLoggingMessageInspector", "BeforeSendReply", msgLen.ToString(CultureInfo.InvariantCulture));
  47.         }
  48.     }

Figure 6

The code above also extracts a ‘cookie’ from the client (lines 14–22). Ignore that part for now: as stated earlier, the code for passing custom data from the client within the header needs a separate article.

The message inspection behavior allows you to access the messages exchanged between the client and server. (Figure 1).

AfterReceiveRequest (lines 10–34) is called by the WCF runtime when data is received from the client.

Line 12 checks whether ETW logging is enabled. If it’s not, the method returns immediately, incurring minimal performance impact on the WCF service call.

A WCF service must have a host process: a Windows service, a console application, or IIS. I’ve experienced problems with some services where memory grows through the day and the service host must be shut down and the WCF services restarted.

Line 27 takes a snapshot of the current managed memory whenever any operation (WCF service call) is made on the service. By capturing and monitoring this value, you can use it like a canary in a coal mine to warn you that something (business logic, repository, etc.) is consuming memory. Collecting this data on each service helps you focus your investigation if your host process’s memory is growing.

Line 29 creates a message containing the memory snapshot and the size of the message received from the client.

Line 31 passes the message to the logger.

As a side note, although it’s not obvious in the code discussed, you should consider using keyword metadata on the _etwLogger.Service() method, or a separate method such as _etwLogger.ServiceMessageInspector(), to allow logging of specific behaviors in the WCF service stack.

Often you don’t want to capture and log data at the Message Inspector level; with specific keywords or methods for the WCF call stack, you can control more granularly what data you collect.
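For example, the context’s EventSource could tag WCF-stack traces with their own keyword so a session can include or exclude them independently. The sketch below is illustrative only: the provider name, keyword values, event ids, and the ServiceMessageInspector method are assumptions, not code from the BizGear repository.

KeywordTaggedEventSource (sketch)
  1. using System.Diagnostics.Tracing;
  2.  
  3. [EventSource(Name = "Demo-SalesServiceContext")]
  4. public sealed class SalesServiceEventSource : EventSource
  5. {
  6.     public static readonly SalesServiceEventSource Log = new SalesServiceEventSource();
  7.  
  8.     public class Keywords
  9.     {
  10.         public const EventKeywords Service          = (EventKeywords)0x01;
  11.         public const EventKeywords MessageInspector = (EventKeywords)0x02;
  12.     }
  13.  
  14.     // Operation-level traces from the Parameter Inspector.
  15.     [Event(1, Keywords = Keywords.Service)]
  16.     public void Service(string className, string methodName, string data)
  17.     {
  18.         WriteEvent(1, className, methodName, data);
  19.     }
  20.  
  21.     // A separate event and keyword for the chatty Message Inspector traces
  22.     // lets an ETW session exclude them while keeping operation-level events.
  23.     [Event(2, Keywords = Keywords.MessageInspector)]
  24.     public void ServiceMessageInspector(string className, string methodName, string data)
  25.     {
  26.         WriteEvent(2, className, methodName, data);
  27.     }
  28. }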

BeforeSendReply() (Figure 6, line 36) is called before the WCF runtime transmits data to the client. The implementation here is similar to AfterReceiveRequest().

As you see, you can use the Message Inspector behavior to get a profile of the amount of data exchanged between the client and server.

 



Operation Behavior

EtwLoggingOperationBehavior
  1. [AttributeUsage(AttributeTargets.All, AllowMultiple = false)]
  2.     public class EtwLoggingOperationBehavior : Attribute, IOperationBehavior
  3.     {
  4.         private readonly IEtwLogger _etwLogger;
  5.  
  6.         public EtwLoggingOperationBehavior(Context context)
  7.         {
  8.             _etwLogger = EtwLoggingFactory.CreateLogger(context);
  9.         }
  10.         
  11.         public void ApplyDispatchBehavior(OperationDescription operationDescription,
  12.                                           DispatchOperation dispatchOperation)
  13.         {
  14.             var serviceName = dispatchOperation.Parent.Type.Name;
  15.             var inspector = new EtwLoggingParamaterInspector(serviceName, _etwLogger);
  16.  
  17.             dispatchOperation.ParameterInspectors.Add(inspector);
  18.         }

 

Figure 7

Compared to the other behaviors, the Operation behavior is straightforward.

The EtwLoggingOperationBehavior’s purpose is to create the parameter inspector and add it to the operation’s ParameterInspectors collection (Figure 7, lines 14–17).

 

 


Parameter Behavior

EtwLoggingParameterInspector
  1. public class EtwLoggingParamaterInspector : IParameterInspector
  2.     {
  3.         private readonly string _service;
  4.         private readonly IEtwLogger _etwLogger;
  5.  
  6.         public EtwLoggingParamaterInspector(string service, IEtwLogger etwLogger)
  7.         {
  8.             _service = service;
  9.             _etwLogger = etwLogger;
  10.         }
  11.  
  12.         void IParameterInspector.AfterCall(string operationName, object[] outputs,
  13.                                             object returnValue, object correlationState)
  14.         {
  15.             //_etwLogger.Service(_service, operationName, "after call");
  16.         }
  17.  
  18.         object IParameterInspector.BeforeCall(string operationName, object[] inputs)
  19.         {
  20.             var currentPrincipal = new WindowsPrincipal(ServiceSecurityContext.Current.WindowsIdentity);
  21.  
  22.             var name = currentPrincipal.Identity.Name;
  23.  
  24.             var msg = string.Format("BeforeCall Name {0}", name);
  25.  
  26.             _etwLogger.Service(_service, operationName, msg);
  27.  
  28.             return null;
  29.         }

Figure 8

 

The Parameter Inspector lets you access the parameters just before the WCF method executes and again just after it returns, before the results are passed down the WCF call stack to be packaged into a message, serialized, and returned to the client.

In a system configured to allow only logged-in domain users to access methods on the WCF service, the Parameter Inspector is a good spot to get the user’s identity (Figure 8, lines 20–22).

If it’s important for your analysis, you could access the parameters from the inputs array and log the parameter values.

Line 26 logs the name of the WCF service, the operation performed on the service, and the user making the WCF call.




Example Log output

This is the left panel of PerfView, with the Service output from the ETW trace selected.

[Screenshot: PerfView left panel with the Service events selected]

 

Below is the right panel showing the traces made by the service behaviors. Because the image is small and difficult to read, I typed the text of the three rows in the grid.

The first message is
className="EtwLoggingMessageInspector" methodName="AfterReceiveRequest" data="EtwCookie:  MsgLen: 828 TotalMemory 1220164"

The second message is
className="SalesService" methodName="GetAllCustomers" data="BeforeCall Name DANSURFACE\Dan"

The third message is
className="EtwLoggingMessageInspector" methodName="BeforeSendReply" data="934"

[Screenshot: PerfView right panel showing the three trace rows]

 


Summary

Injecting WCF behaviors into the WCF call stack can provide a lot of information about the behavior of your application and how it’s used, provide baseline performance data during development, and allow real-time monitoring of a production system.

In the next article, I’ll explore how to enable capturing ETW events within your client application. By capturing events within your application, you can create an administration screen to monitor the application’s behavior, send the ETW events to another system using a messaging system, or use the new Semantic Logging Application Block (SLAB) to send ETW events to a file or database.


Instrumenting Your Application to Measure its Performance Part 6 Injecting EventSource Using AOP

 

In the last article, you learned how to wrap your code with a wrapper / envelope class contained in a using statement, which quickly instruments a block of code with enter / exit traces.

You can find the source code to this article in my GitHub repository  BizGear

So far in this series, instrumenting your code has meant adding, or ‘sprinkling’, trace statements through your code. A drawback to this technique is that your code can quickly become littered with them, and adding the trace statements is something you must do manually.

Even though the last article demonstrated an easier route, this article shows another technique that instruments your code without ‘sprinkling’ trace statements through your methods: Aspect Oriented Programming, or AOP.

Wikipedia defines AOP:

“In computing, AOP is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns.”

The power of AOP is you can decorate interfaces with attributes which define code to call before entering a method and after exiting a method. The trace statements executed, or ‘injected’ into the method remains separate from the method implementation.

There are several popular AOP frameworks. Because I use the Unity IOC container in my day job, I will use Unity’s AOP capabilities to demonstrate injecting ETW trace statements.

I recommend the following articles as two starting points for more information about Unity Interception:

Aspect-Oriented Programming, Interception and Unity 2.0 by Dino Esposito, MSDN Magazine, December 2010,

Interception with Unity MSDN


Overview

The following picture is taken from the Unity Interception documentation on MSDN.

[Diagram: Unity interception—client, proxy, behaviors pipeline, and target object]

The diagram shows that you need to register your object in the Unity container. When you (the client in the diagram) ask the container to instantiate an object, the container returns a proxy to your object rather than the actual object. The proxy references a behaviors pipeline, which you write and which is called as the client makes calls to the target object.

The behaviors in the pipeline can be chained together; in the example used in this article there is only one behavior. You can write the behavior so it is called before the target is invoked as well as after the target returns. The implementation of the behavior performs the ETW tracing.

 


First Steps

The BizGear application implements a business rule called CanReturnMerchandise. This business rule is a bare-bones implementation called through the WCF service; it merely returns true when the user clicks the Return button on the Merchandise tab of the WPF client:

[Screenshot: BizGear Merchandise tab with the Return button]

In a real world application, the rule implementation would likely call a repository to access data tables and implement business rule logic using data read from a database.

To demonstrate Unity Interception, this implementation is good enough.

CanReturnMerchandiseRule
  1. public class CanReturnMerchandiseRule : ICanReturnMerchandiseRule
  2. {
  3.     [Performance]
  4.     public bool CanReturnMerchandise(Product product, SalesOrder salesOrder)
  5.     {
  6.         return ValidateProductIsReturnable(product) && ValidateReturnTimePeriod(product, salesOrder);
  7.     }
  8.     //Validate product is not discontinued or purchased "as is"
  9.     private bool ValidateProductIsReturnable(Product product)
  10.     {
  11.         return true;
  12.     }
  13.     private bool ValidateReturnTimePeriod(Product product, SalesOrder salesOrder)
  14.     {
  15.         return true;
  16.     }
  17. }

Figure 1

 

This is the ICanReturnMerchandiseRule interface

ICanReturnMerchandiseRule
  1. public interface ICanReturnMerchandiseRule
  2. {
  3.     bool CanReturnMerchandise(Product product, SalesOrder salesOrder);
  4. }

 

Figure 2

 

First, we register CanReturnMerchandiseRule with the Unity container via its interface and tell Unity to perform interception when the container resolves ICanReturnMerchandiseRule.

To register the interface and request Unity to perform interception, you register like this:

RegistrationModule
  1. public class RegistrationModule : IRegistrationModule
  2. {
  3.     public void Register(IUnityContainer container)
  4.     {
  5.         container.AddNewExtension<Interception>();
  6.         container.RegisterType<ICanReturnMerchandiseRule, CanReturnMerchandiseRule>()
  7.         .Configure<Interception>()
  8.         .SetInterceptorFor<ICanReturnMerchandiseRule>(new InterfaceInterceptor());
  9.     }
  10. }

Figure 3

Line 6 configures Unity to create a CanReturnMerchandiseRule instance when resolving an object through the ICanReturnMerchandiseRule interface.

Lines 7 and 8 configure Unity to perform interception.

You may have noticed the [Performance] attribute on line 3 of Figure 1, which decorates the CanReturnMerchandise method in the first code example above.

Here is the implementation of the Performance attribute:

Code Snippet
  1. public class PerformanceAttribute : HandlerAttribute
  2. {
  3.     public override ICallHandler CreateHandler(IUnityContainer container)
  4.     {
  5.         return new CanReturnMerchandiseRuleHandler();
  6.     }
  7. }

 

Figure 4

This attribute overrides CreateHandler, which Unity Interception calls to create the handler that is invoked around CanReturnMerchandiseRule.CanReturnMerchandise().

All that’s left to do is provide the implementation to CanReturnMerchandiseRuleHandler:

CanReturnMerchandiseRuleHandler
  1. class CanReturnMerchandiseRuleHandler : ICallHandler
  2. {
  3.     public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
  4.     {
  5.         var productData = string.Empty;
  6.         var salesData = string.Empty;
  7.         var etwTracer = MerchandiseReturnContextEventSource.Log;
  8.         if (etwTracer.IsEnabled())
  9.         {
  10.             var methodName = input.MethodBase.Name;
  11.             var className = input.Target.ToString();
  12.             var product = input.Inputs[0] as Product;
  13.             if (product != null)
  14.             {
  15.                 productData = string.Format("Product: Id {0} Description {1}", product.Id, product.Description);
  16.             }
  17.             var salesOrder = input.Inputs[1] as SalesOrder;
  18.             if (salesOrder != null)
  19.             {
  20.                 salesData = string.Format("SalesOrder Id {0} SalesDate {1}", salesOrder.Id, salesOrder.SalesDate);
  21.             }
  22.             etwTracer.Business(className, methodName, productData + salesData);
  23.         }
  24.         // Perform the operation
  25.         var methodReturn = getNext().Invoke(input, getNext);
  26.         return methodReturn;
  27.     }
  28.     public int Order { get; set; }
  29. }

 

Figure 5

Line 7 gets the singleton instance of the class that creates ETW traces.

To incur as little performance impact as possible, line 8 checks whether logging is enabled before composing a log message.

Recall that CanReturnMerchandise() accepts two parameters: a Product object and a SalesOrder object (Figure 2, line 3).

In the code example shown in Figure 5:

Lines 10 and 11 obtain the method name (CanReturnMerchandise) and the class name (CanReturnMerchandiseRule).

Lines 12–16 get a reference to the Product object and format a message identifying it.

Lines 17–21 get a reference to the SalesOrder object and format a message identifying it.

Line 22 logs an ETW message containing the class name, method name, and identifying information for the Product and SalesOrder objects passed from the client.

Finally, line 25 invokes CanReturnMerchandiseRule.CanReturnMerchandise().

Line 26 returns the result (true) from CanReturnMerchandiseRule.CanReturnMerchandise().
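To see the interception fire outside the WCF stack, here is a minimal, hypothetical console wiring. RegistrationModule and ICanReturnMerchandiseRule come from the figures above; the object initializers for Product and SalesOrder are assumptions:

InterceptionDemo (sketch)
  1. using System;
  2. using Microsoft.Practices.Unity;
  3.  
  4. class Demo
  5. {
  6.     static void Main()
  7.     {
  8.         var container = new UnityContainer();
  9.         new RegistrationModule().Register(container);
  10.  
  11.         // Resolving through the container returns an interception proxy,
  12.         // so the [Performance] handler's Invoke runs around the call.
  13.         var rule = container.Resolve<ICanReturnMerchandiseRule>();
  14.  
  15.         var product = new Product { Id = 1, Description = "Hiking Boots" };
  16.         var order = new SalesOrder { Id = 1, SalesDate = DateTime.Now };
  17.  
  18.         var canReturn = rule.CanReturnMerchandise(product, order);
  19.         Console.WriteLine(canReturn); // true, with an ETW trace emitted first
  20.     }
  21. }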

The following code, in the MerchandiseReturnViewModel, is performed when the user clicks the Return button:

MerchandiseReturnExecute
  1. private void MerchandiseReturnExecute()
  2. {
  3.     using (var proxy = new MerchandiseReturnClient())
  4.     {
  5.         var dto = new MerchandiseDto {ProductId = 1, SalesOrderId = 1};
  6.         var canReturn = proxy.CanReturnMerchandise(dto);
  7.         ReturnResult = "Complete";
  8.         _etwLogger.Presentation("MerchandiseReturnViewModel", "MerchandiseReturnExecute", canReturn.ToString());
  9.     }
  10. }

 

This calls, at line 6, through the WCF proxy into the server-side code CanReturnMerchandiseRule.CanReturnMerchandise():

CanReturnMerchandise
  1. [Performance]
  2. public bool CanReturnMerchandise(Product product, SalesOrder salesOrder)
  3. {
  4.     return ValidateProductIsReturnable(product) && ValidateReturnTimePeriod(product, salesOrder);
  5. }

 

Without Unity Interception, when you run the demo on a single machine, all you should see in the ETW output is the trace statement created at line 8 of the client code.

With Unity Interception, you should see both the trace statement emitted on the client in the MerchandiseReturnViewModel and the trace statement emitted on the server by the CanReturnMerchandiseRuleHandler:

[Screenshot: PerfView output showing the client and server trace statements]

 

Summary

Using AOP (in this example, Unity Interception), you can separate your implementation from the instrumentation in the handlers that Unity ‘injects’ into your code at run time.

Although simple, I hope the example provides enough information to build upon for real-world application instrumentation.

In the next article, I’ll show you how to take the idea of AOP (code injection) and inject instrumentation into the WCF call stack.


Instrumenting Your Application to Measure its Performance Part 5 Wrapping your code with EventSource

 

In the last article, you learned how to use keywords to filter for specific events in an application. Filtering is one way to narrow event listening to a specific layer of a specific context in an application.

You can find the source code to this article in my GitHub repository  BizGear

As you instrument your application, you may soon find that adding event trace statements around blocks of code becomes repetitive and tedious. In this article, I’ll demonstrate a way to reduce the trace statements.

The technique described in this article works well when you want to measure the time a block of code, such as a service call, takes to run. This technique is also good for quickly checking the timing changes due to code refactoring.

 


BizGear WPF Application and WCF Service

In this article, I’ll start using a sample WPF application called BizGear. The client application connects to WCF services. BizGear is a simple WPF application using a tab control to segment three areas: Sales, Merchandise, and Administration. In order to keep the application as simple as possible, the WCF service does not access a database. Data returned from the WCF service(s) is a canned result set.

I’ll assume you have some background in WPF and WCF services.

Because these articles are about code instrumentation, the discussion focuses on specific operations in the demo application rather than on an overview of the entire application. As I progress through these articles, I’ll discuss the application code as needed for each article’s discussion points.

When BizGear starts it should look similar to this:

[Screenshot: the BizGear main window]

 

Measuring Execution Time to Get Customers

In the BizGear sample application, GetCustomersCommand.Execute() method runs when a user clicks the Get Customers button. The Execute method calls the SalesService which returns a list of customers.

The method then displays the names of the customers in the list box by adding the customers to the view model’s observable collection.

 

Execute
  1. public void Execute(object parameter)
  2. {
  3.     _viewModel.Customers.Clear();
  4.  
  5.     using (var proxy = new SalesClient())
  6.     {
  7.         var customers = proxy.GetAllCustomers();
  8.  
  9.         foreach (var customer in customers)
  10.         {
  11.             _viewModel.Customers.Add(customer);
  12.         }
  13.     }
  14. }

 

 

When you instrument your code to measure performance, a good starting point is to add instrumentation at the system boundaries.

 

[Diagram: instrumentation points at the system boundaries]

 

In our example application, one boundary is the event handler of a user action on the client.

For this example, we want to instrument the client side code that’s executed when a user clicks the Get Customers button. As you see in the example above, the code first clears the collection, which clears the display of the customers in the ListBox control at the top of the form. The code then makes a WCF service call to the SalesService GetAllCustomers() to get a list of customers.

Another useful boundary to instrument is the WCF service running on the server. This boundary is where the client interacts with the server. For this example, the client and server run on a single machine and there is only one client. By using filtering as discussed in the previous article, it’s straightforward to enable ETW to monitor for events that can tell us how long it takes to execute code at the client boundary, at the service boundary, or both.

In a real world environment, there may be several hundred clients using the WCF services on the server. Enabling instrumentation on the client and instrumentation on the server, and consolidating events can give a holistic view of system activity. In a future article I’ll discuss a way to instrument the service boundary as well as how to consolidate data from the client(s) and server. Here we’ll focus on instrumenting the client side code.

Another thing you should consider measuring is performance changes when refactoring code.

You should capture timings before and after your refactoring to ensure there is no unexpected performance degradation. If you’re running automated unit tests, you may want to enable ETW tracing around key methods of your application to capture performance metrics during the tests, as sketched below.
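As a rough illustration of that idea, the MSTest sketch below wraps the code under test in the same envelope class developed later in this article. The test class and repository type are hypothetical; ApplicationLogFactory, Context, and Layer are from the sample:

GetCustomersTimingTests (sketch)
  1. using Microsoft.VisualStudio.TestTools.UnitTesting;
  2.  
  3. [TestClass]
  4. public class GetCustomersTimingTests
  5. {
  6.     [TestMethod]
  7.     public void GetAllCustomers_EmitsEnterAndExitEvents()
  8.     {
  9.         // Run the tests under an ETW session (e.g. PerfView); the enter/exit
  10.         // pair brackets the code under test and timestamps both events.
  11.         using (ApplicationLogFactory.Create(Context.Sales, Layer.Business,
  12.                                             "GetCustomersTimingTests", "GetAllCustomers"))
  13.         {
  14.             var repository = new SalesRepository();   // assumed repository type
  15.             var customers = repository.GetAllCustomers();
  16.  
  17.             Assert.IsNotNull(customers);
  18.         }
  19.     }
  20. }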

In the example code above, you could add a call to SalesContextEventSource at the beginning of the method and a second call at the end of the method. The problem is that adding calls at the beginning and end of a method is tedious and adds more ‘noise’.

An alternative is to use an ‘envelope’ class which wraps calls to SalesContextEventSource and implements IDisposable, allowing you to wrap the code you want to instrument inside a using statement (Figure 1, line 3):

 

Execute with ETW Trace
  1. public void Execute(object parameter)
  2. {
  3.     using (var appLog = ApplicationLogFactory.Create(Context.Sales,
  4.                                                     Layer.Presentation,
  5.                                                     "SalesViewModel",
  6.                                                     "GetCustomersCommand"))
  7.     {
  8.         _viewModel.Customers.Clear();
  9.  
  10.         using (var proxy = new SalesClient())
  11.         {
  12.             var customers = proxy.GetAllCustomers();
  13.  
  14.             foreach (var customer in customers)
  15.             {
  16.                 _viewModel.Customers.Add(customer);
  17.             }
  18.         }
  19.     }
  20. }


Figure 1

 

The class factory call (line 3) returns an ApplicationLog object. The parameters to the factory provide the context (Sales), the filter (Layer.Presentation), and the class name and method name: SalesViewModel and GetCustomersCommand.

When the class factory creates the ApplicationLog, the constructor makes a call to SalesContextEventSource with the data ‘enter’. When the using statement (line 3) goes out of scope, the ApplicationLog.Dispose() method is called, which calls a method to emit the data ‘exit’.

The resulting trace output seen in PerfView appears similar to the following:

[Screenshot: PerfView showing the enter/exit trace output]

 


Drilling into the Implementation

The following is a basic UML diagram of the classes and interfaces:

[UML diagram of the logging classes and interfaces]

Figure 2

 

 

This is the ApplicationLogFactory which returns an ApplicationLog:

ApplicationLogFactory
  1. public class ApplicationLogFactory : IApplicationLogFactory
  2. {
  3.     private static readonly Dictionary<Context, ILogStrategy> Strategies = new Dictionary<Context, ILogStrategy>();
  4.  
  5.     static ApplicationLogFactory()
  6.     {
  7.         Strategies.Add(Context.Sales, new SalesLogStrategy());
  8.         Strategies.Add(Context.Business, new BusinessLogStrategy());
  9.         Strategies.Add(Context.MerchandiseReturns, new MerchandiseReturnLogStrategy());
  10.     }
  11.  
  12.     public static ApplicationLog Create(Context context, Layer layer, string className, string methodName)
  13.     {
  14.         return new ApplicationLog(Strategies[context], className, methodName, layer);
  15.     }
  16.  
  17.     ApplicationLog IApplicationLogFactory.Create(Context context, Layer layer, string className, string methodName)
  18.     {
  19.         return Create(context, layer, className, methodName);
  20.     }
  21. }

 

Figure 3

 

The ApplicationLogFactory contains a dictionary of Strategy objects, following the Strategy design pattern. Each EventSource context (Sales, Business, and MerchandiseReturns) is wrapped in a Strategy object.

The ApplicationLogFactory’s dictionary, constructor, and Create method are static, which essentially makes the ApplicationLogFactory a singleton, global in the process.

The dictionary is made static for quick lookup. But to prevent incorrect data from being written in a multithreaded application, the object retrieved from the dictionary is cloned, and the appropriate class name, method name, and layer are copied to the clone.

The Create() method (shown being called at line 3 of Figure 1) gets the appropriate Strategy object from the dictionary, sets the layer, class name, and method name, and constructs a new ApplicationLog.

ILogStrategy defines the logging interface. Clone() constructs a new copy of the contained strategy class; creating a new copy ensures the correct information is emitted in a multithreaded application.

ILogStrategy
  1. public interface ILogStrategy : IDisposable
  2. {
  3.     ILogStrategy Clone(string className, string methodName, Layer layer);
  4.  
  5.     void Enter();
  6.     void Log(string data);
  7. }

 

Figure 4

 

 

This is the implementation of ApplicationLog:

ApplicationLog
  1. public class ApplicationLog : ILogStrategy
  2. {
  3.     private ILogStrategy _logStrategy;
  4.  
  5.     public ApplicationLog(ILogStrategy logStrategy, string className, string methodName, Layer layer)
  6.     {
  7.         _logStrategy = logStrategy.Clone(className, methodName, layer);
  8.  
  9.         Enter();
  10.     }
  11.  
  12.     public ILogStrategy Clone(string className, string methodName, Layer layer)
  13.     {
  14.         throw new NotImplementedException();
  15.     }
  16.  
  17.     public void Dispose()
  18.     {
  19.         _logStrategy.Dispose();
  20.         _logStrategy = null;
  21.     }
  22.  
  23.     public void Enter()
  24.     {
  25.         _logStrategy.Enter();
  26.     }
  27.  
  28.     public void Log(string data)
  29.     {
  30.         _logStrategy.Log(data);
  31.     }
  32. }

 

Figure 5

 

This is the LogStrategyBase implementation:

LogStrategyBase
  1. public class LogStrategyBase : ILogStrategy
  2. {
  3.     public string ClassName { get; set; }
  4.     public string MethodName { get; set; }
  5.     public Layer Layer { get; set; }
  6.  
  7.     public virtual ILogStrategy Clone(string className, string methodName, Layer layer)
  8.     {
  9.         return null;
  10.     }
  11.  
  12.     public virtual void Dispose()
  13.     {
  14.     }
  15.  
  16.     public virtual void Enter()
  17.     {
  18.     }
  19.  
  20.     public virtual void Log(string data)
  21.     {
  22.     }
  23. }

 

Figure 6

 

LogStrategyBase implements the three properties which hold the class name, the method name, and the filtering layer. The Clone, Enter, Log, and Dispose methods are virtual and are implemented in the derived classes.

The following is the implementation for one of the Strategy classes- SalesLogStrategy:

SalesLogStrategy
  1. public class SalesLogStrategy : LogStrategyBase
  2. {
  3.     public override ILogStrategy Clone(string className, string methodName, Layer layer)
  4.     {
  5.         return new SalesLogStrategy {ClassName = className, MethodName = methodName, Layer = layer};
  6.     }
  7.  
  8.     public override void Dispose()
  9.     {
  10.         Log("exit");
  11.     }
  12.  
  13.     public override void Enter()
  14.     {
  15.         Log("enter");
  16.     }
  17.     
  18.     public override void Log(string data)
  19.     {
  20.         if (!IsEnabled()) return;
  21.  
  22.         switch (Layer)
  23.         {
  24.             case Layer.Presentation:
  25.                 SalesContextEventSource.Log.Presentation(ClassName, MethodName, data);
  26.                 break;
  27.  
  28.             case Layer.Business:
  29.                 SalesContextEventSource.Log.Business(ClassName, MethodName, data);
  30.                 break;
  31.  
  32.             case Layer.Data:
  33.                 SalesContextEventSource.Log.Data(ClassName, MethodName, data);
  34.                 break;
  35.         }
  36.     }
  37.  
  38.     private static bool IsEnabled()
  39.     {
  40.         return SalesContextEventSource.Log.IsEnabled();
  41.     }
  42. }

 

Figure 7

 

In Figure 3, the ApplicationLogFactory passes an object which implements the ILogStrategy interface to the ApplicationLog constructor.

In Figure 5, line 7, the ApplicationLog constructor clones the passed strategy; the clone receives the class name, method name, and layer.

The constructor then calls Enter() (Figure 5, line 9). Enter() forwards to the contained strategy's Enter() (line 25), which emits the “enter” message (Figure 7, line 15).

The Log() method (Figure 7, lines 18 – 36) first checks IsEnabled() (line 20). This check is an optimization that avoids an EventSource trace call when logging isn't enabled for the provider.

If logging is enabled, the appropriate method of the SalesContextEventSource is called to emit the message (lines 22 – 35).

When the using statement goes out of scope (Figure 1, line 37), ILogStrategy.Dispose() is called, which emits “exit” (Figure 7, line 10).

 


Conclusion

Using the above technique to wrap ETW trace generation in a using statement makes it quick and easy to measure how long a block of code takes to execute: the “enter” and “exit” events carry timestamps that bracket the block.
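
Figure 1 itself is not reproduced in this excerpt, so here is a hedged sketch of the calling pattern end to end; the Context value and the method names are assumptions:

public void Execute()
{
    // "enter" is emitted by the ApplicationLog constructor...
    using (var appLog = ApplicationLogFactory.Create(Context.Sales, Layer.Business, "OrderProcessor", "Execute"))
    {
        appLog.Log("processing order");   // optional extra tracing
    }   // ...and "exit" is emitted here when Dispose() runs

    // The timestamps on the "enter" and "exit" events give the block's elapsed time.
}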

As a reader exercise, in Figure 1, you can use the variable appLog created in line 21 to emit additional tracing.

For example, in the foreach loop in lines 32 – 35, you could add the line

appLog.Log(customer.Firstname + " " + customer.Lastname);

to emit tracing of the customer names.

In the next article, I'll show you how to use the Unity IoC container to inject trace statements so that you need not add any EventSource tracing inside a method.


Instrumenting Your Application to Measure its Performance Part 4 Using EventKeywords for Event Filtering

 

Introduction

In the last article, I demonstrated how to implement a class derived from EventSource for instrumenting a sales application called BizGear. In this article, I'll build upon that implementation to show you how to use EventKeywords to filter specific areas of your application's instrumentation.

You can find the source code for this article in my GitHub repository, BizGear.

Many applications have multiple layers to separate responsibilities. In this demo, we'll assume the application we're instrumenting has three layers: the Presentation layer, the Business Logic layer, and the Data layer.

As discussed in previous articles, a large application often has multiple contexts of responsibility. A sales order application may have a part of the application responsible for sales processing, a part which handles merchandise returns, and a part for inventory management. Each context of responsibility spans across the multiple layers.

 

[Diagram: the contexts of responsibility spanning the Presentation, Business, and Data layers]

 

Sometimes you may want to enable event generation in all parts of your application. More often, though, you will want to narrow event generation to specific areas of interest. One way to accomplish this narrowing is to use EventKeywords.

We'll extend the EventSources project with a new class, InventoryContextEventSource.

I want to show you how to accomplish the following:

  • Generate events for each layer in a specific context.
    For example, capture events for the Sales context, in the vertical stack: Presentation, Business, and Data layers.

  • Filter for events on a specific layer for a specified context.
    For example, capture events for the Sales context in only the Presentation layer.
  • Filter for events across multiple contexts, on different layers of each context.
    For example, capture events at the points noted by red stars:
    • Capture events for the Sales context in the Presentation and Business layers.
    • Capture events for the Inventory context in the Business layer.
    • Capture events for the Merchandise Return context in the Presentation and Data layers.

 

[Diagram: red stars marking the capture points across contexts and layers]

 

 


Project Updates

The first update we'll make is extending the Keywords class in SalesContextEventSource.

Add three additional EventKeywords: Business, DataAccess, and Service:

Keywords
  1. public class Keywords
  2. {
  3.     public const EventKeywords Presentation = (EventKeywords)0x0001;
  4.     public const EventKeywords Business = (EventKeywords)0x0002;
  5.     public const EventKeywords DataAccess = (EventKeywords)0x0004;
  6.     public const EventKeywords Service = (EventKeywords)0x0008;
  7. }
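
Because keywords are bit flags, a single event may be tagged with more than one. For example, a hypothetical event (not part of BizGear) belonging to both the Business and Service areas could be declared like this:

[Event(5, Level = EventLevel.Informational, Keywords = Keywords.Business | Keywords.Service)]
public void BusinessService(string className, string methodName, string data)
{
    if (IsEnabled()) WriteEvent(5, className, methodName, data);
}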

 

Next, add three new methods to SalesContextEventSource: Business, Data, and Service. Be sure to increment the event value for each new method and to match the values in the Event attribute and the WriteEvent() call. Your SalesContextEventSource class should appear similar to the following:

SalesContextEventSource
  1. //guid for this provider:  {d5b29467-62f5-54a9-4861-96cf631b95b4}
  2. [EventSource(Name = EventSourceNames.SalesContext)]
  3. public class SalesContextEventSource : EventSource
  4. {
  5.     public static readonly SalesContextEventSource Log = new SalesContextEventSource();
  6.  
  7.     [Event(1, Level = EventLevel.Informational, Keywords = Keywords.Presentation)]
  8.     public void Presentation(string className, string methodName, string data){ if (IsEnabled()) WriteEvent(1, className, methodName, data); }
  9.  
  10.     [Event(2, Level = EventLevel.Informational, Keywords = Keywords.Business)]
  11.     public void Business(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(2, className, methodName, data); }
  12.  
  13.     [Event(3, Level = EventLevel.Informational, Keywords = Keywords.DataAccess)]
  14.     public void Data(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(3, className, methodName, data); }
  15.  
  16.     [Event(4, Level = EventLevel.Informational, Keywords = Keywords.Service)]
  17.     public void Service(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(4, className, methodName, data); }
  18.     
  19.     public class Keywords
  20.     {
  21.         public const EventKeywords Presentation = (EventKeywords)0x0001;
  22.         public const EventKeywords Business = (EventKeywords)0x0002;
  23.         public const EventKeywords DataAccess = (EventKeywords)0x0004;
  24.         public const EventKeywords Service = (EventKeywords)0x0008;
  25.     }
  26. }

 

Next, add two strings to the EventSourceNames class for the names of the Inventory and Merchandise Returns contexts:

EventSourceNames
  1. public static class EventSourceNames
  2. {
  3.     //guid for this provider:  {d5b29467-62f5-54a9-4861-96cf631b95b4}
  4.     public const string SalesContext = "Acme-BizGear-SalesContext";
  5.  
  6.     //guid for this provider:  {9a9cf874-7496-5df5-6e80-1c5804eccd57}
  7.     public const string InventoryContext = "Acme-BizGear-InventoryContext";
  8.  
  9.     //guid for this provider:  {3e4539f0-447d-5791-0b48-ee4106c9ced8}
  10.     public const string MerchandiseReturnContext = "Acme-BizGear-MerchandiseReturnsContext";
  11. }

 

Next, add two new classes to the EventSources project: InventoryContextEventSource and MerchandiseReturnContextEventSource.

Your EventSources project should look similar to this:

[Screen capture: the EventSources project in Solution Explorer]

 

At this point, the only significant difference between the event source classes is the class names. For expediency, you can copy and paste the contents of SalesContextEventSource into the InventoryContextEventSource and MerchandiseReturnContextEventSource files. Then rename the classes appropriately and update each EventSource attribute to use the matching EventSourceNames constant.

Your InventoryContextEventSource should look similar to the following:

InventoryContext
  1. //guid for this provider:  {9a9cf874-7496-5df5-6e80-1c5804eccd57}
  2. [EventSource(Name = EventSourceNames.InventoryContext)]
  3. public class InventoryContextEventSource : EventSource
  4. {
  5.     public static readonly InventoryContextEventSource Log = new InventoryContextEventSource();
  6.  
  7.     [Event(1, Level = EventLevel.Informational, Keywords = Keywords.Presentation)]
  8.     public void Presentation(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(1, className, methodName, data); }
  9.  
  10.     [Event(2, Level = EventLevel.Informational, Keywords = Keywords.Business)]
  11.     public void Business(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(2, className, methodName, data); }
  12.  
  13.     [Event(3, Level = EventLevel.Informational, Keywords = Keywords.DataAccess)]
  14.     public void Data(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(3, className, methodName, data); }
  15.  
  16.     [Event(4, Level = EventLevel.Informational, Keywords = Keywords.Service)]
  17.     public void Service(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(4, className, methodName, data); }
  18.  
  19.     public class Keywords
  20.     {
  21.         public const EventKeywords Presentation = (EventKeywords)0x0001;
  22.         public const EventKeywords Business = (EventKeywords)0x0002;
  23.         public const EventKeywords DataAccess = (EventKeywords)0x0004;
  24.         public const EventKeywords Service = (EventKeywords)0x0008;
  25.     }
  26. }

 

 


Enabling Multiple Providers

For your first test, you should verify that the three event providers (SalesContextEventSource, InventoryContextEventSource, and MerchandiseReturnContextEventSource) will generate the expected events.

First, modify the console application to call each event source:

Code Snippet
  1. private static void TestSalesLogging()
  2. {
  3.     var tstMsg1 = "111111111_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  4.     var tstMsg2 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  5.     var tstMsg3 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  6.     var tstMsg4 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  7.     var tstMsg5 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  8.     var tstMsg6 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  9.     var tstMsg = tstMsg1 + tstMsg2 + tstMsg3 + tstMsg5;
  10.  
  11.     SalesContextEventSource.Log.Presentation("Sales_Presentation_className", "Sales_presentation_MethodName", tstMsg);
  12.     SalesContextEventSource.Log.Presentation("Sales_Presentation_className", "Sales_presentation_MethodName", "Sales presentation data");
  13.     SalesContextEventSource.Log.Business("Sales_Business_ClassName", "Sales_business_MethodName", "sales business Data");
  14.     SalesContextEventSource.Log.Data("Sales_Data_ClassName", "Sales_data access_MethodName", "sales data access data");
  15. }
  16.  
  17. private static void TestBusinessLogging()
  18. {
  19.     InventoryContextEventSource.Log.Presentation("Business_Presentation_className", "Business_presentation_MethodName", "Business presentation data");
  20.     InventoryContextEventSource.Log.Business("Business_Business_ClassName", "Business_business_MethodName", "Business business Data");
  21.     InventoryContextEventSource.Log.Data("Business_Data_ClassName", "Business_data access_MethodName", "Business data access data");
  22. }
  23.  
  24. private static void TestMerchandiseLogging()
  25. {
  26.     MerchandiseReturnContextEventSource.Log.Presentation("MerchandiseReturn_Presentation_className", "MerchandiseReturn_presentation_MethodName", "MerchandiseReturn presentation data");
  27.     MerchandiseReturnContextEventSource.Log.Business("MerchandiseReturn_MerchandiseReturn_ClassName", "MerchandiseReturn_MerchandiseReturn_MethodName", "MerchandiseReturn business Data");
  28.     MerchandiseReturnContextEventSource.Log.Data("MerchandiseReturn_Data_ClassName", "MerchandiseReturn_data access_MethodName", "MerchandiseReturn data access data");
  29. }

 

Next, modify the .cmd file so Logman enables an ETW session which listens to all three event sources.

To accomplish this, create a separate text file containing the event providers and tell Logman to use that configuration file.

Create a text file called providers.txt and enter the following three lines:

"{d5b29467-62f5-54a9-4861-96cf631b95b4}"
"{9a9cf874-7496-5df5-6e80-1c5804eccd57}"
"{3e4539f0-447d-5791-0b48-ee4106c9ced8}"

Next, create a new file called CreateBizGear.cmd and enter the following line into the file:

LOGMAN create trace BizGear -o C:\ETWTraces\BizGear.etl -pf providers.txt -f bincirc -max 1000

The -pf parameter tells Logman to reference the providers.txt file.

Enter Logman /query and you should see a Data Collector Set called BizGear.

Enter Logman /query BizGear and you should see output similar to the following screen capture:

image

 

In the above screen capture, notice that BizGear is configured to listen for the three event providers, for all keywords on each event provider, and the current status is stopped.

Enter Logman /start BizGear to enable the data collector.

Run the console application.

Enter Logman /stop BizGear to stop listening for events and have ETW flush its buffers to the ETL file.
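
If you repeat this start/run/stop cycle often, you can fold it into a small convenience script; the following sketch assumes the console application's executable is named SimpleDemoConsole.exe and is on your path:

Logman /start BizGear
SimpleDemoConsole.exe
Logman /stop BizGear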

Start PerfView and open the generated ETL file. You should see all the events. In the following screen capture, I highlighted the events for the InventoryContext on the left and you can see the event data on the right:

[Screen capture: the PerfView Events view]

Left hand pane:

[Screen capture: the event providers and event names]

Right hand pane:

[Screen capture: the event data]

 


Filtering Event Generation to Specific Layers

Consider a scenario where you want to generate events for only the SalesContextEventSource, Presentation layer.

Delete the current BizGear data collector by entering Logman /Delete BizGear.

Modify the providers.txt file to contain just the following line:

"{d5b29467-62f5-54a9-4861-96cf631b95b4}" 0xFFFFFFF1

Run CreateBizGear.cmd again to create the data collector. Notice that only the SalesContext event provider is configured, and KeywordsAny is now set to 0xFFFFFFF1. (I'll explain later in this article why 0xFFFFFFF1 is used.)

[Screen capture: Logman /query BizGear showing the single provider with KeywordsAny 0xFFFFFFF1]

Enter Logman /start BizGear to start the collection. Run the console application. Stop event capture and flush the ETW buffers to the ETL file by entering Logman /stop BizGear.

Run PerfView to examine the captured events:

[Screen capture: PerfView showing only the Acme-BizGear-SalesContext/Presentation events]

Although the console application called event generation for all providers, only the SalesContextEventSource, and only its Presentation layer, generated events.


Filtering Multiple Contexts and Multiple Layers

In this example we want to generate events for:

  • SalesContextEventSource: Presentation and Business layers
  • InventoryContextEventSource: Business layer
  • MerchandiseReturnContextEventSource: Presentation and Data layers

First, delete the current BizGear data collector.

Next, enter the following lines in the file providers.txt (without the line wrapping)

"{d5b29467-62f5-54a9-4861-96cf631b95b4}" 0xFFFFFFF3 0x04
#Sales Context Presentation and Business layers (1 | 2), level Informational

"{9a9cf874-7496-5df5-6e80-1c5804eccd57}" 0xFFFFFFF2 0x04
#Inventory Context Business layer (2), level Informational

"{3e4539f0-447d-5791-0b48-ee4106c9ced8}" 0xFFFFFFF5 0x04
#Merchandise Context Presentation and Data layers (1 | 4), level Informational

The second value on each line specifies the keywords you want to enable for event generation.

Recall the keywords are 1 = Presentation, 2 = Business and 4 = Data Access.

To enable event generation for the Presentation and Business layers, logically OR 0x1 and 0x2, giving 0x3, and leave the Data Access (0x4) and Service (0x8) bits off. With the upper bits left set, the result is 0xFFFFFFF3, which enables logging for the Presentation and Business layers.
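
As a quick sanity check, here is the same arithmetic in C#, using the values from the Keywords class shown earlier. The upper bits stay set so that events tagged with any other keywords remain enabled; only the low nibble holding the layer keywords is filtered:

const uint Presentation = 0x1, Business = 0x2, DataAccess = 0x4;

uint sales       = 0xFFFFFFF0 | Presentation | Business;    // 0xFFFFFFF3
uint inventory   = 0xFFFFFFF0 | Business;                    // 0xFFFFFFF2
uint merchandise = 0xFFFFFFF0 | Presentation | DataAccess;   // 0xFFFFFFF5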

The third parameter on the line, 0x04, is the EventLevel to capture. EventLevels are set as:

  • Log Always (0x00)
  • Critical (0x01)
  • Error (0x02)
  • Warning (0x03)
  • Informational (0x04)
  • Verbose (0x05)

In your class files, you specified the event level Informational. You’ll set the same here, which is 0x04.

(In later examples, I'll show why you may want to configure event capture for only Critical and Error events and ignore Informational-level events.)

Run Logman /query BizGear and verify the level and KeywordsAny values are correct:

[Screen capture: Logman /query BizGear showing the three providers with their levels and KeywordsAny values]

 

Start an ETW session by entering Logman /start BizGear

Run the console application again. Then stop the data collector.

Run PerfView to examine the current event captures:

In the left hand pane you should see:

[Screen capture: the PerfView left pane]

And in the right hand pane:

[Screen capture: the PerfView right pane]

As desired, our event generation is limited to:

  • SalesContextEventSource, Presentation and Business Layer
  • InventoryContextEventSource, Business Layer
  • MerchandiseReturnContextEventSource, Presentation and Data Layer

At this point in the series of articles, you've learned how to do the following:

  • Define event sources to capture events from different parts of your application
  • Use Logman to enable capturing all events in the application
  • Use Logman to filter specific areas of concern in specific layers of your application.

Up to this point I’ve used a console application to simulate how different parts of your application would interface to the classes derived from EventSource.

In future articles, I’ll use a simple client application, built as a WPF application, using DLLs representing the Business Layer and Data Access layer, to show different ways we may use our event classes in a real application.


Instrumenting Your Application to Measure its Performance Part 3 An introduction to EventSource, Logman, and PerfView

 

This article discusses starting points for application instrumentation.

You can find the source code for this article in my GitHub repository, BizGear.

EventSource is a class introduced in .NET 4.5. Although installed as part of .NET 4.5, you can use EventSource in your .NET 4.0 applications. If your environment has yet to migrate to .NET 4.5, you can reference the EventSource.DLL in your Visual Studio project.

Logman is a utility included as part of Windows. We’ll use Logman as a controller to enable and disable event capture from your application. Logman is well documented in MSDN, Technet, and blogs. In this article, I’ll provide just enough information to get you started. I encourage you to use the following sites as starting points to learn more about using Logman.

Two Minute Drill:  LOGMAN.EXE

Logman

You can get PerfView from the Microsoft Download Center. Like Logman, PerfView is well documented on MSDN, others' blogs, and its own online help system. For this series of articles, I'll assume you have downloaded PerfView and can run it successfully on your machine. As with Logman, I'll discuss just enough information for this article and encourage you to reference the following sites to learn more about using PerfView.

Channel 9 videos  PerfView Tutorials

Vance Morrison’s Blog

Consider you want to start instrumenting a sales order application called BizGear. BizGear manages the sales of products related to running a business. Like many enterprise applications, BizGear is a multilayered application (Presentation, Business, Data), and contains several areas of responsibility (contexts) such as Product Returns, Administration, Billing, Inventory, and Sales.

We want to start instrumenting the Sales portion of BizGear.

For simplicity of demonstration, I’ll use a console application to demonstrate writing events. In a real world application, the code demonstrated in the console application would be used in the application’s modules.

 


Implementing EventSource classes

Start by creating a solution containing a console application project called SimpleDemoConsole and a class library project called EventSources.

Add two files to your EventSources project: EventSourceNames.cs and SalesContextEventSource.cs.

Your solution file should appear similar to this:

[Screen capture: the solution in Solution Explorer]

EventSourceNames contains the user-friendly names of the event sources you will be adding through this series.

Recall there are many ETW providers in the Windows OS, as well as other applications using EventSource to generate events. Each event source needs a unique name so that a controller doesn't accidentally turn events on or off for multiple event sources. To see a list of OS-registered event sources, enter logman query providers in a command prompt window.

By convention, and to make your event source unique from other event sources in the operating system and applications, you should use the naming convention <company name>-<application name>-<area of instrumentation>.

As a starting point, we’ll call our company “ACME”. Our sales application is BizGear, and we want to instrument the part of the sales application responsible for processing sales—the sales context.

Given the above, the following is our first version of EventSourceNames:

EventSourceNames
  1. namespace EventSources
  2. {
  3.     public static class EventSourceNames
  4.     {
  5.         //guid for this provider:  {d5b29467-62f5-54a9-4861-96cf631b95b4}
  6.         public const string SalesContext = "Acme-BizGear-SalesContext";
  7.  
  8.         //guid for this provider:  {9a9cf874-7496-5df5-6e80-1c5804eccd57}
  9.         public const string BusinessContext = "Acme-BizGear-InventoryContext";
  10.  
  11.         //guid for this provider:  {3e4539f0-447d-5791-0b48-ee4106c9ced8}
  12.         public const string MerchandiseReturnContext = "Acme-BizGear-MerchandiseReturnsContext";
  13.     }
  14. }

 

Next we want to implement the first version of the class which will generate events. We’ll call this class SalesContextEventSource.

SalesContextEventSource
  1. namespace EventSources.EventSources
  2. {
  3.     using System.Diagnostics.Tracing;
  4.  
  5.     //guid for this provider:  {d5b29467-62f5-54a9-4861-96cf631b95b4}
  6.     [EventSource(Name = EventSourceNames.SalesContext)]
  7.     public class SalesContextEventSource : EventSource
  8.     {
  9.         public static readonly SalesContextEventSource Log = new SalesContextEventSource();
  10.  
  11.         [Event(1, Level = EventLevel.Informational, Keywords = Keywords.Presentation)]
  12.         public void Presentation(string className, string methodName, string data){ if (IsEnabled()) WriteEvent(1, className, methodName, data); }
  13.  
  14.         [Event(2, Level = EventLevel.Informational, Keywords = Keywords.Business)]
  15.         public void Business(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(2, className, methodName, data); }
  16.  
  17.         [Event(3, Level = EventLevel.Informational, Keywords = Keywords.DataAccess)]
  18.         public void Data(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(3, className, methodName, data); }
  19.  
  20.         [Event(4, Level = EventLevel.Informational, Keywords = Keywords.Service)]
  21.         public void Service(string className, string methodName, string data) { if (IsEnabled()) WriteEvent(4, className, methodName, data); }
  22.         
  23.         public class Keywords
  24.         {
  25.             public const EventKeywords Presentation = (EventKeywords)0x0001;
  26.             public const EventKeywords Business = (EventKeywords)0x0002;
  27.             public const EventKeywords DataAccess = (EventKeywords)0x0004;
  28.             public const EventKeywords Service = (EventKeywords)0x0008;
  29.         }
  30.     }
  31. }

On line 6, we attribute the class with the friendly name. There is a standard, RFC 4122, for converting the friendly name to a GUID. Some tools accept the friendly name given to your derived EventSource class and convert it internally to a GUID before interfacing with the ETW subsystem. Other tools use the GUID to specify the event provider's identity to the ETW subsystem. So, as a quick reference, the GUID of Acme-BizGear-SalesContext is added in the comment on line 5. I'll show you later, in the console application, how to quickly find the GUID for SalesContextEventSource.

Line 11 contains metadata for the Presentation method. Usually, when implementing a method used to instrument an application, I specify Level = Informational.

On line 12, notice IsEnabled(). IsEnabled() returns true only when the ETW subsystem has a session established for this event provider via a controller. Because WriteEvent() is not performed unless a controller enables the ETW subsystem to receive events from this provider, this is one of the reasons EventSource has very low performance overhead when used to instrument your application. If you were to examine the IL generated by this code, you would find that very few instructions are actually executed when no events are being sent to the ETW subsystem.

So, even though you may have many points in your application containing event logging, the overhead is very, very minor. And when a controller enables event capture, the overhead to your application remains low: most of the work is generating the event message (partly defined in the metadata on line 11 and partly via the parameters on line 12) to ETW, and ETW handles the buffering and multithreading needed to write the event to a log file, outside your process.

Note: Something important I want to point out on lines 11 and 12: the event numbers in the metadata and in the WriteEvent call must be the same.

[Event(1, …
…WriteEvent(1, …

If you make the mistake of not matching the value in the metadata with the value in WriteEvent, event generation will silently fail. Reading MSDN blogs, tooling such as FxCop may become available in the future to warn of a mismatch. For now, the best advice I can offer is: be careful to ensure the event number in your Event() metadata matches the event number in WriteEvent()!
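
If you want an early warning for this mistake, one option (my own sketch, not from the original article) is a small EventListener harness in a test; EventListener ships alongside EventSource in System.Diagnostics.Tracing. When the ids disagree, the event arrives with the wrong EventId or not at all, so the assertion fails:

using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.Tracing;

// Collects every event a source writes so a test can assert on it.
internal sealed class CollectingListener : EventListener
{
    private readonly List<EventWrittenEventArgs> _events = new List<EventWrittenEventArgs>();

    public IList<EventWrittenEventArgs> Events { get { return _events; } }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        _events.Add(eventData);
    }
}

// Example check, run inside a test method:
using (var listener = new CollectingListener())
{
    listener.EnableEvents(SalesContextEventSource.Log, EventLevel.Informational);

    SalesContextEventSource.Log.Presentation("className", "methodName", "data");

    // Presentation is declared as event 1.
    Debug.Assert(listener.Events.Count == 1 && listener.Events[0].EventId == 1);
}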

The Keywords metadata we’ll ignore for now. I’ll cover more about using Keywords in a future article.

 


Implementing Console Application

Now we implement a console application. I find the console application useful to quickly verify the operation of generating ETW events as you make changes to your classes derived from EventSource.

Simple Demo Console
  1. namespace SimpleDemoConsole
  2. {
  3.     #region usings
  4.  
  5.     using System;
  6.     using System.Diagnostics.Tracing;
  7.     using EventSources.EventSources;
  8.     using System.Threading;
  9.  
  10.     #endregion
  11.  
  12.     class Program
  13.     {
  14.         static void Main()
  15.         {
  16.             DisplayGuids();
  17.  
  18.             DisplayEnabled();
  19.  
  20.             DisplayEventLevels();
  21.  
  22.             for (var ctr = 0; ctr < 5000; ctr++ )
  23.             {
  24.                 if ((ctr % 100) == 0)
  25.                     Console.WriteLine(ctr);
  26.  
  27.                 Thread.Sleep(1);
  28.                 TestSalesLogging();
  29.  
  30.                 TestBusinessLogging();
  31.  
  32.                 TestMerchandiseLogging();
  33.             }
  34.  
  35.             Console.WriteLine("Finished");
  36.             Console.ReadLine();
  37.         }
  38.  
  39.         private static void DisplayGuids()
  40.         {
  41.             var salesGuid             = SalesContextEventSource.Log.Guid.ToString();
  42.             var businessGuid          = BusinessContextEventSource.Log.Guid.ToString();
  43.             var merchandiseReturnGuid = MerchandiseReturnContextEventSource.Log.Guid.ToString();
  44.  
  45.             Console.WriteLine("Sales Guid\t\t{0}", salesGuid);
  46.             Console.WriteLine("Business Guid\t\t{0}", businessGuid);
  47.             Console.WriteLine("Merchandise Return Guid\t{0}", merchandiseReturnGuid);
  48.         }
  49.  
  50.         private static void DisplayEnabled()
  51.         {
  52.             var isSalesEnabled       = SalesContextEventSource.Log.IsEnabled();
  53.             var isBusinessEnabled    = BusinessContextEventSource.Log.IsEnabled();
  54.             var isMerchandiseEnabled = MerchandiseReturnContextEventSource.Log.IsEnabled();
  55.  
  56.             Console.WriteLine("Sales enabled \t\t{0} ", isSalesEnabled);
  57.             Console.WriteLine("Business enabled \t{0} ",  isBusinessEnabled);
  58.             Console.WriteLine("Merchandise enabled \t{0}", isMerchandiseEnabled);
  59.         }
  60.  
  61.         private static void DisplayEventLevels()
  62.         {
  63.             var always        = Convert.ToInt32(EventLevel.LogAlways);
  64.             var critical      = Convert.ToInt32(EventLevel.Critical);
  65.             var error         = Convert.ToInt32(EventLevel.Error);
  66.             var warning       = Convert.ToInt32(EventLevel.Warning);
  67.             var informational = Convert.ToInt32(EventLevel.Informational);
  68.             var verbose       = Convert.ToInt32(EventLevel.Verbose);
  69.  
  70.             Console.WriteLine("always\t\t\t{0}", always);
  71.             Console.WriteLine("critical\t\t{0}", critical);
  72.             Console.WriteLine("error\t\t\t{0}", error);
  73.             Console.WriteLine("warning\t\t\t{0}", warning);
  74.             Console.WriteLine("informational\t\t{0}", informational);
  75.             Console.WriteLine("verbose\t\t\t{0}", verbose);
  76.         }
  77.  
  78.         private static void TestSalesLogging()
  79.         {
  80.             var tstMsg1 = "111111111_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  81.             var tstMsg2 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  82.             var tstMsg3 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  83.             var tstMsg4 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  84.             var tstMsg5 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  85.             var tstMsg6 = "123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_";
  86.             var tstMsg = tstMsg1 + tstMsg2 + tstMsg3 + tstMsg5;
  87.  
  88.             SalesContextEventSource.Log.Presentation("Sales_Presentation_className", "Sales_presentation_MethodName", tstMsg);
  89.             SalesContextEventSource.Log.Presentation("Sales_Presentation_className", "Sales_presentation_MethodName", "Sales presentation data");
  90.             SalesContextEventSource.Log.Business("Sales_Business_ClassName", "Sales_business_MethodName", "sales business Data");
  91.             SalesContextEventSource.Log.Data("Sales_Data_ClassName", "Sales_data access_MethodName", "sales data access data");
  92.         }
  93.  
  94.         private static void TestBusinessLogging()
  95.         {
  96.             BusinessContextEventSource.Log.Presentation("Business_Presentation_className", "Business_presentation_MethodName", "Business presentation data");
  97.             BusinessContextEventSource.Log.Business("Business_Business_ClassName", "Business_business_MethodName", "Business business Data");
  98.             BusinessContextEventSource.Log.Data("Business_Data_ClassName", "Business_data access_MethodName", "Business data access data");
  99.         }
  100.  
  101.         private static void TestMerchandiseLogging()
  102.         {
  103.             MerchandiseReturnContextEventSource.Log.Presentation("MerchandiseReturn_Presentation_className", "MerchandiseReturn_presentation_MethodName", "MerchandiseReturn presentation data");
  104.             MerchandiseReturnContextEventSource.Log.Business("MerchandiseReturn_MerchandiseReturn_ClassName", "MerchandiseReturn_MerchandiseReturn_MethodName", "MerchandiseReturn business Data");
  105.             MerchandiseReturnContextEventSource.Log.Data("MerchandiseReturn_Data_ClassName", "MerchandiseReturn_data access_MethodName", "MerchandiseReturn data access data");
  106.         }
  107.     }
  108. }

 

Line 41 shows an easy way to find the value of the GUID of your event source’s friendly name. Put a breakpoint on line 41, then copy and paste the value read in the debugger into the comment on line 5 of the SalesContextEventSource.

Lines 78 – 91 show an example of how you would generate events from your application.

 


Using Logman

Now that you have everything needed to generate an event to ETW, the next step is using Logman as a controller to enable event capture.

Recall from above, a good starting point for using Logman is the following link:

Two Minute Drill:  LOGMAN.EXE

First, start a command prompt window with Administrator privileges. I do this by going to the Start Menu, selecting Microsoft Visual Studio 2012, then the Visual Studio Tools submenu, then Developer Command Prompt. Right-click and select Run as administrator in the popup context menu.

[Screen capture: starting the Developer Command Prompt as administrator]

Your command prompt window should show Administrator in its title. The next thing I do is create a working directory. I called mine ETWTraces.

[Screen capture: the ETWTraces working directory]

Because specifying the parameters to Logman quickly becomes tedious, I create a .cmd file.

What you want to do is create a data collector which you can start and stop.

Enter the following, as one line, into a file called create.cmd

LOGMAN create trace SalesContext -o C:\ETWTraces\SalesContext.etl
-p "{d5b29467-62f5-54a9-4861-96cf631b95b4}" -f bincirc -max 1000

When you execute the .cmd file, the above command will create a data collector called SalesContext. You’ll use Logman to enable the ETW subsystem to start listening for the events from the provider SalesContextEventSource.

This is an example of using Logman where you need the GUID, rather than the friendly name, to specify your event provider Acme-BizGear-SalesContext.

Events sent from your event provider to the ETW subsystem are saved in the Event Trace Log (ETL) file SalesContext.etl. The other Logman parameters specify the file's maximum size and that the capture uses a circular file. This is very useful when you want to collect events over a long period, capturing some activity in your application, without worrying that the file will grow overly large, or that a fixed-size file will fill and lose important data.

Note: I found that you cannot use the circular buffer option in Logman and then view the data in PerfView. You need to use PerfView to capture the data and configure PerfView to create a circular buffer. I found that Logman overwrites metadata in the ETL file which PerfView needs. If you want to let data collection run for a long time period using a circular buffer, use PerfView to capture the events instead of Logman. I plan to write a more detailed post about this in the future.

Now run create.cmd in the command prompt window.

Next enter

logman /query

Your command prompt window should appear like this

[Screen capture: logman /query output showing the SalesContext data collector]

To start listening for events on the SalesContext data collector, enter

Logman /start SalesContext

Next run the console application SimpleDemoConsole to generate an event.

Next enter the following on the command prompt to stop listening for events from Acme-BizGear-SalesContext

Logman /stop SalesContext

When you stop data collection, the controller (Logman) notifies the ETW subsystem to close the session it started and to flush any events in its internal buffers to the ETL file.

A nice feature to notice: every time you start and stop the data collector, the ETW subsystem writes a new file with a different version number.

For example, I started and stopped the SalesContext data collector twice, running the console application to generate events between each stop / start. ETW created the following two ETL files:

[Screen capture: the two versioned ETL files]

 


Using PerfView

So far you learned how to create an event provider to generate events. You learned how to use Logman as a controller to interface to ETW to start and stop event captures. Now that you have ETL files containing captured events, the next step is viewing the data.

After downloading PerfView, configure your environment path so PerfView will run from any directory in the command prompt window.

Enter PerfView on the command prompt to start PerfView. When PerfView starts, it will initialize its navigation window to display the ETL files it finds in the directory:

[Screen capture: the PerfView navigation window listing the ETL files]

All you need to do is select the ETL file containing the event data you want to view, and click the Events node:

[Screen capture: the Events node under the selected ETL file]

Doing so displays a dialog like this:

[Screen capture: the PerfView Events dialog]

In this window, you’ll see “Acme-BizGear-SalesContext/Presentation” as one of the nodes in the left pane:

[Screen capture: the Acme-BizGear-SalesContext/Presentation node in the left pane]

PerfView has parsed the ETL data and recognized the friendly name of the event provider: “Acme-BizGear-SalesContext”

PerfView, using the metadata Keywords, parsed the event to belong to the keyword “Presentation” in the SalesContextEventSource class.

Finally, on the right side of the dialog, PerfView shows the event name, timestamp (to microsecond resolution), the process and thread identity, and the data passed to the parameters in

SalesContextEventSource.Presentation(string className, string methodName, string data);

[Screen capture: the event name, timestamp, process and thread ids, and payload data]

With what you learned here, you have the start of being able to instrument your application to acquire performance data. In the next articles, I’ll build upon this starting point.


Instrumenting Your Application to Measure its Performance Part 2 An Overview of the ETW subsystem

 

The ETW subsystem has been written about extensively in other blogs and magazine articles. What I wish to accomplish here is to provide a brief overview of the ETW subsystem as a common context for the future articles on this blog that use it. To drill deeper into the ETW subsystem, I suggest the following MSDN Magazine article as a starting point:

Improving Debugging and Performance Tuning with ETW
http://msdn.microsoft.com/en-us/magazine/cc163437.aspx

ETW provides a high speed, low overhead, system wide data collection subsystem.

The following is an architectural view of the ETW system copied from the article referenced above:

[Diagram: the ETW architecture, copied from the MSDN article referenced above]

Let’s simplify what this diagram is about.

An Event is a message to the ETW system containing data for logging. In the context of these articles, an Event is the message generated using the EventSource class.

A Controller is an application interfacing with the ETW subsystem to enable or disable event collection. A controller is usually an out-of-the-box application such as Logman, PerfView, or the Semantic Logging Application Block (SLAB). A controller can also be your own custom application written to interface with ETW.

A Provider is an application sending event messages to the ETW subsystem. In the context of these articles, a provider is your application, using the EventSource class to send messages to the ETW subsystem.

A Consumer is an application that gets events from the ETW subsystem. The consumer may read and display the events to the user in real time, or read events from a file.

A Session is the ‘pipe’ between ETW and the controller; it establishes the internal buffers and memory management for collecting events to an Event Trace Log (ETL) file.

Often applications are both controllers and consumers. For example, PerfView can be used as a controller to enable and disable one or more event providers. After stopping event collection and ETW completes writing events to an ETL file, PerfView collects (consumes) the events from an ETL file for analysis.

Or, as another example, you can use Logman as a controller to enable and disable event generation, then use PerfView to consume events from an ETL file.

Similarly, SLAB is often both a controller and consumer. SLAB can be configured to enable (control) events and consume events, directing the captured events to a console, a flat file, a rolling flat file, a database table, or the NT EventLog.

Pub-Sub Pattern

For those familiar with the Publisher Subscriber pattern, you may be thinking “That ETW Subsystem looks a lot like a Pub Sub system”. And I would agree with you.

From Wikipedia Publish-Subscribe pattern
http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern

In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers. Instead, published messages are characterized into classes, without knowledge of what, if any, subscribers there may be. Similarly, subscribers express interest in one or more classes, and only receive messages that are of interest, without knowledge of what, if any, publishers there are.

Let’s reword the above Wikipedia passage a little to put it into context of ETW with events generated from an application using the EventSource class.

The ETW subsystem uses the publish-subscribe pattern, where senders of events, called providers, do not program the messages to be sent directly to specific receivers, called consumers. Instead, published events are characterized into classes (derived from EventSource) without knowledge of what, if any, consumers (PerfView or SLAB) there may be. Similarly, controllers express interest in one or more events (defined in classes derived from EventSource), and consumers only receive messages that are of interest, without knowledge of what, if any, providers there are.

Remember that the ETW subsystem has been packaged with Windows since Windows 2000. The beauty of the ETW system is that it has so many providers within the Windows ecosystem. Providers such as the .NET Garbage Collector, disk I/O subsystem, and CPU—where you can actually drill down on CPU activity per core on your machine.

The advantage of ETW over ad hoc solutions is that it does memory, thread, and buffering management to provide a high speed, low overhead data collection system, external to your process.

How fast and how much overhead does using ETW have when you use it to instrument your application?

The following snippet is from Vance Morrison's blog, Windows high speed logging: ETW in C#/.NET using System.Diagnostics.Tracing.EventSource.

http://blogs.msdn.com/b/vancem/archive/2012/08/13/windows-high-speed-logging-etw-in-c-net-using-system-diagnostics-tracing-eventsource.aspx

How fast is ETW? Well, that is actually super-easy to answer with ETW itself because each event comes with its own high-accuracy timestamp. Thus you can run the EventSource demo and simply compare the timestamps of back-to-back events to see how long an event takes. On my 5 year old 2Ghz laptop I could log 10 events in 16 usec so it take 1.6 usec to log an event. Thus you can log 600K events a second. Now that would take 100% of the CPU, so a more reasonable number to keep in your head is 100K. That is ALOT of events. The implementation does not take locks in the logging path and any file writes are handled by the operating system asynchronously so there is little impact to the program other than the CPU it took to actually perform the logging.

I encourage you to read the complete referenced post.

Because you can collect data from multiple providers, you can obtain a more holistic view of your application's performance than with ad hoc tracing/logging solutions.

Consider the story of three blind men asked to describe an elephant. One blind man stood in front of the elephant and ran his hands along the elephant’s trunk. The second blind man stood beside the elephant ran his hands along the elephant’s side. The third blind man stood behind the elephant and ran his hand along the elephant’s tail. Each blind man tried to describe an elephant. Because each had an incomplete view of the elephant, none of the blind men described the elephant correctly.

When you use ad hoc approaches to capture performance data in a large-scale, enterprise-wide application (one built as multiple layers, using service-oriented architecture and multithreading, with multiple touch points and user loads), you may find your application to be an elephant and yourself like one of the story's blind men.

The beauty of the ETW system is that once your application becomes a provider, you can use a controller to enable events from your application as well as other providers and consolidate that data to get a holistic system view.

You can partition your application into areas of responsibility and use features of EventSource to filter specific parts of the application.

Think of your application in terms of contexts, where each context becomes an event provider. For example, consider a sales order application. Taking the idea of Bounded Context from domain driven design, you may view your sales order application as groups of responsibility. One area of the application is responsible for customer service, another part of the application is responsible for product returns, another for sales, and another for billing. By putting contextual boundaries around parts of your application, you give consideration to what ‘context’ that code belongs to.

In terms of instrumentation, your application is not just a sales order application, but a grouping of contexts: for example, a sales context and a billing context. By viewing your application as groups of contextual responsibilities, you can provide finer-grained control of application instrumentation.

Given a basic understanding of the ETW subsystem and how using it can help identify performance problems, in the next article of this series I’ll describe the simplest way to make your application an event provider and how to use the Logman and PerfView tools.


Instrumenting Your Application to Measure its Performance Part 1 Introduction

 

A common problem I’ve experienced in past projects is measuring the performance of some part of an application.

Have you ever refactored a routine and needed to measure the performance difference between the original and the new code? Has a customer ever told you your application is too slow? (Seldom does a customer tell you the application is too fast!). As you add new features to your application and you deploy new versions, do you know the changes in your latest version’s performance? Have you experienced performance issues as your application’s user base grows? Can you identify bottlenecks in your application and determine how memory, cpu utilization, or database access are affecting your application’s performance?

I don’t know about you—maybe I’m just unlucky—but almost every large application I’ve dealt with has had to deal with performance problems.

Performance issues may arise from poorly coded algorithms, poor memory management, excessive network calls, excessive disk activity or anemic hardware. Often bottlenecks don’t become apparent until the user load increases. Have you ever worked on a project where a team member said (or you said yourself), “It works great on my desktop! What do you mean there’s a problem when 500 users try to log in when they come into the office in the morning?”

Before you can effectively respond to performance issues in your application, you need to have a baseline measurement. If you don’t have a baseline, you are immediately in reactive mode when dealing with a failing system. Examples of reactive mode behaviors include trying to placate the customer(s) while working overtime to fix a problem, recycling the app pool to flush memory, or investigating the cause of database locks.

You need to be proactive and build instrumentation into your application as you develop it, and acquire performance data during development and testing before deploying to production.

In my experience, when trying to identify performance hot spots, the development team used ad hoc methods to acquire performance data.

I've seen a variety of techniques used in past projects to instrument an application. The most common is adding logging statements using a framework like Log4Net or the Enterprise Library Logging Application Block. Sometimes the development team created their own ‘framework’ using writes to a custom database table or flat file. In addition to the liberal sprinkling of write/trace statements through the code base, the Stopwatch class is frequently used to capture timings.
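
To make the contrast concrete, a typical fragment of the Stopwatch approach looks something like the following sketch; OrderService and ProcessOrders are hypothetical, and Log4Net stands in for whichever logging framework the team adopted:

using System.Diagnostics;
using log4net;

public class OrderService
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderService));

    public void ProcessOrdersTimed()
    {
        var stopwatch = Stopwatch.StartNew();
        ProcessOrders();                                  // the code being measured
        stopwatch.Stop();

        // The timing leaves through the project's logging framework.
        Log.DebugFormat("ProcessOrders took {0} ms", stopwatch.ElapsedMilliseconds);
    }

    private void ProcessOrders() { /* the work being timed */ }
}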

Ad hoc solutions like these often create several problems.

First, the ad hoc solution provides no standard tooling to control the data capture. Often the data capture is controlled by editing a configuration file. Performance data is usually targeted to a flat file or a database. Sometimes custom code is needed to control the destination of the data (to a database or flat file), and additional custom code is created for circular buffering.

Often the instrumentation to capture performance metrics doesn’t work well for a production environment. The instrumentation is added as an afterthought and a special hotfix is deployed to production containing the instrumentation. After the performance issue is resolved, the instrumentation is removed and the fixed application is deployed.

Another problem with many ad hoc solutions is called the observer effect, where measurements of a system cannot be made without affecting the system. Many ad hoc solutions, by adding statements to capture data to a flat file or database, may change the application's performance as the process writes to disk or performs database inserts.

And finally, many ad hoc solutions provide a narrow view of an application’s performance problems. Ad hoc solutions I’ve seen make it hard to see a holistic view of the environment in which you can examine memory, CPU usage, disk I/O, or network traffic, in relationship to data collected for the application.

A solution I’ve been exploring which avoids the above problems uses the Event Tracing for Windows (ETW) subsystem to instrument your application.

Prior to .NET 4.5, using the ETW subsystem to instrument an application was difficult for .NET developers. Although the ETW subsystem has been part of the Windows operating system since Windows 2000, the interfaces to connect to ETW were extremely unfriendly to the .NET programmer. Interfacing with ETW required the user to create a manifest file, register the manifest file, use a resource compiler, and interface with several native methods.

With the introduction of the EventSource class in .NET 4.5, interfacing to the ETW subsystem has become as easy as falling off a log. *

The ETW subsystem dramatically simplifies capturing data to measure the performance of your application. ETW and tools using it offer dramatic advantages over ad hoc solutions for controlling event generation and capturing events for analysis in development and production systems.

The following posts in this series will give an overview of the ETW system, using tools to control data collection through ETW, developing your own tools, and present ideas and techniques to effectively instrument your application.

* Although the EventSource class is built into .NET 4.5, for projects which are not yet able to update to .NET 4.5, you can reference the EventSource.DLL in your .NET 4.0 project.


Instrumenting Your Application to Measure its Performance Roadmap

 

This is a roadmap to a series of articles about instrumenting your application using the .NET 4.5 EventSource class, the ETW subsystem, and out-of-the-box utilities to acquire performance data.

You can find the code for these articles on my GitHub repository BizGear 

The following are articles I plan to write for this series.

· Part 8 Using EventListener in your application

· Part 9 Forwarding events using service bus

· Part 10 Writing a custom Event Monitor

· Part 11 Introducing Semantic Application Logging Block (SLAB)

· Part 12 Using SLAB within your application

· Part 13 Using SLAB outside your application (out of process)


Connecting to a Workflow Service from LightSwitch Part 3

 

Overview

In the third and final post of this topic, I'll show one way of using the results of a workflow service in a LightSwitch application.

The content of this post draws heavily from two other blog posts:

Beth Massi’s How Do I Video: Create a Screen that can Both Edit and Add Records in a LightSwitch Application

and

Robert Green’s Using Both Remote and Local Data in a LightSwitch Application

Take note of some problems I had in this demo implementation. Look for Issue near the end of this post for more discussion.

As time permits, I plan to investigate further and will be soliciting feedback from the LightSwitch developer’s forum.

If you know of a different implementation to avoid the problems I’ve encountered, please don’t hesitate to send me an email or comment here.


Implementation

I want to keep the example easy to demonstrate, so I'll use only the OrderHeader table from the AdventureWorks sample database.

The OrderHeader table contains fields such as ShipMethod, TotalDue, and Freight. For demo purposes I’m treating the data in Freight as Weight, instead of a freight charge.

Based on its business rules, the workflow service returns the value of a calculated shipping amount. The OrderHeader table does not contain a ShippingAmount, so I thought this a good opportunity to use what Beth Massi and Robert Green explained in their blogs to have LightSwitch create a table in its intrinsic database and connect that table's data to the AdventureWorks OrderHeader table.

In Part 2 of this topic, you created a LightSwitch application and added a Silverlight library project to the solution containing an implementation of a proxy to connect to the workflow service. In the LightSwitch project, you should already have a connection to the AdventureWorks database and the SalesOrderHeader table.

Now add a new table to the project. Right-click the Data Source node and select Add a Table in the context menu. Call the table ShippingCost. This new table is created in the LightSwitch intrinsic database.

Add a new field, ShipCost, of type Decimal to the table.

Add a relationship to the SalesOrderHeaders table, with a zero-or-one to one relationship.

[Screen capture: the relationship dialog]

Change the Summary Property for ShippingCost from the default Id to ShipCost.

[Screen capture: the ShippingCost table's Summary Property]

Now add a search and a detail screen for the SalesOrderHeader entity. The default names of the screens are SearchSalesOrderHeader and SalesOrderHeaderDetail.

I want to use the SalesOrderHeaderDetail as the default screen when a user clicks the Add… button on the search screen.

How to do this is explained in detail in Beth Massi’s How Do I video and Robert Green’s blog.

In the SearchSalesOrderHeader screen designer, add the Add… button. Right-click the Add… button and select Edit Execute Code in the context menu. LightSwitch opens the SearchSalesOrderHeader.cs file. Add the following code to the griddAddAndEditNew_Execute() method so that LightSwitch displays the SalesOrderHeaderDetail screen.

 

[Screen capture: the code added to the Execute method]

Now open the SalesOrderHeaderDetail screen in the designer. Rename the query to SalesOrderHeaderQuery. Add a SalesOrderHeader data item via the Add Data Item menu at the top of the designer. I'm being brief on the detailed steps here because they are very well explained in Beth Massi's video and Robert Green's blog.

Your screen designer should appear similar to this:

[Screenshot: the SalesOrderHeaderDetail screen designer]

Click the Write Code button in the upper-left menu of the designer and select SalesOrderHeaderDetail_Loaded under General Methods.

With the SalesOrderHeaderDetail.cs file open, add a private method that the screen code can call to get the calculated shipping amount.

[Screenshot: the GetCalculatedShipping() helper method, Lines 87-91]
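The helper itself is short. In this sketch, CalculateShippingClient and CalculateShipping are my assumed names for the Part 2 proxy and its service operation, and I'm assuming the proxy exposes a synchronous wrapper around the workflow service call:

    using System;

    public partial class SalesOrderHeaderDetail
    {
        // Sketch only: CalculateShippingClient / CalculateShipping are
        // assumed names for the Part 2 proxy and its operation.
        private decimal GetCalculatedShipping(SalesOrderHeader order)
        {
            var proxy = new CalculateShippingClient();
            decimal shipping = proxy.CalculateShipping(
                order.Freight,     // demo: Freight holds the package weight
                order.SubTotal,
                order.ShipMethod);
            return Math.Round(shipping, 2);   // round to two decimal places
        }
    }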

This method creates an instance of the proxy to connect to the workflow service and passes the Freight (weight), SubTotal, and ShipMethod from the selected SalesOrderHeader row. (Lines 87-90)

It returns the calculated value rounded to two decimal places. (Line 91)

When the user selects a SalesOrderHeader item from the search screen, the code in SalesOrderHeaderDetail_Loaded() is called and SalesOrderID.HasValue will be true. (Line 17)

[Screenshot: the SalesOrderHeaderDetail_Loaded() method, Lines 17-40]
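The _Loaded() logic described below has roughly this shape. Treat it as a sketch rather than the exact screenshot code; the ShippingCosts entity set and the ApplicationData data-source name follow LightSwitch's defaults for the intrinsic database:

    partial void SalesOrderHeaderDetail_Loaded()
    {
        if (SalesOrderID.HasValue)
        {
            // An existing order was selected on the search screen.
            SalesOrderHeader = SalesOrderHeaderQuery;   // the renamed query
            decimal shipCost = GetCalculatedShipping(SalesOrderHeader);

            if (SalesOrderHeader.ShippingCost == null)
            {
                // No ShippingCost row exists yet: create one in the
                // intrinsic database, link it to the order, and save.
                var cost = DataWorkspace.ApplicationData.ShippingCosts.AddNew();
                cost.ShipCost = shipCost;
                cost.SalesOrderHeader = SalesOrderHeader;
                DataWorkspace.ApplicationData.SaveChanges();
            }
            else
            {
                // Refresh the stored cost with the latest calculated value.
                SalesOrderHeader.ShippingCost.ShipCost = shipCost;
                DataWorkspace.ApplicationData.SaveChanges();
            }
        }
        else
        {
            // Add... was clicked; see the sketch after the next screenshot.
        }
    }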

The _Loaded() method calls the workflow service to get the calculated shipping data. (Line 21)

It then checks if the selected row already contains a ShippingCost item. (Line 23).

When you first run the application, the ShippingCost item will be null because no ShippingCost item was created and associated with the SalesOrderHeader item from the AdventureWorks data.

So the implementation creates a new ShippingCost entity, sets the calculated ShipCost, associates the SalesOrderHeader entity with the new ShippingCost entity, and saves the changes to the database. (Lines 25-29)

If the SalesOrderHeader does have a ShippingCost, the code gets the ShippingCost entity, updates the ShipCost property with the latest calculated shipping cost, and saves the update. (Lines 33-38)

Lines 19-40 handle the case where the user selects an existing SalesOrderHeader item from the search screen. Next, the implementation handles the case where the user clicks the Add… button and SalesOrderID.HasValue (Line 17) is false.

[Screenshot: the Add… branch of SalesOrderHeaderDetail_Loaded(), Lines 45-58]
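A sketch of the Add… branch follows. AdventureWorksData is my assumed name for the attached data source, and the initial property values are placeholders chosen only to satisfy the database's non-null constraints; your required set may differ:

    // else branch of SalesOrderHeaderDetail_Loaded(): Add... was clicked.
    SalesOrderHeader = DataWorkspace.AdventureWorksData.SalesOrderHeaders.AddNew();
    SalesOrderHeader.OrderDate = DateTime.Now;            // placeholder values
    SalesOrderHeader.DueDate = DateTime.Now.AddDays(7);
    SalesOrderHeader.ShipMethod = "GROUND";
    SalesOrderHeader.Freight = 0m;
    SalesOrderHeader.SubTotal = 0m;
    SalesOrderHeader.ModifiedDate = DateTime.Now;
    SalesOrderHeader.rowguid = Guid.NewGuid();            // must be unique

    var newCost = DataWorkspace.ApplicationData.ShippingCosts.AddNew();
    newCost.ShipCost = -1m;                   // sentinel; fixed up in _Saved()
    newCost.SalesOrderHeader = SalesOrderHeader;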

For this condition, the implementation creates a new SalesOrderHeader item and a new ShippingCost item. The AdventureWorks database defaults some of the fields when a new row is added, but it requires other fields, such as the rowguid, to be initialized to a new unique value. (Line 53)

For this demo, to create the new SalesOrderHeader without raising an exception, I initialized the properties with the values shown in Lines 45-53. Similarly, I initialized the properties of the new ShippingCost item as shown in Lines 56-58.

Note the setting of ShipCost to -1 in Line 56. This is another 'code smell': I set the sentinel value here and check for it in the SalesOrderHeaderDetail_Saved() method. I found that when this new record is saved, the _Saved() method must call GetCalculatedShipping() to overwrite the uninitialized (-1) value of the shipping cost.

Issue: There must be a better way of doing this implementation, and I would love to receive comments by email or on this post with ways to improve the code. It works for demo purposes, but it doesn't look right for an application I'd want to use beyond a demo.

Finally, there is the case where the user has clicked the Add… button and saves the changes, or has made a change to an existing SalesOrderHeader item and clicks the Save button.

[Screenshot: the SalesOrderHeaderDetail_Saved() method, Lines 69-77]
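And a sketch of the _Saved() logic discussed below, using the same assumed names as the earlier sketches:

    partial void SalesOrderHeaderDetail_Saved()
    {
        decimal shipCost = GetCalculatedShipping(SalesOrderHeader);

        if (SalesOrderHeader.ShippingCost == null)
        {
            // An edited order that never received a ShippingCost row.
            var cost = DataWorkspace.ApplicationData.ShippingCosts.AddNew();
            cost.ShipCost = shipCost;
            cost.SalesOrderHeader = SalesOrderHeader;
            DataWorkspace.ApplicationData.SaveChanges();
        }
        else if (SalesOrderHeader.ShippingCost.ShipCost == -1m)
        {
            // Replace the -1 sentinel written when Add... created the row.
            SalesOrderHeader.ShippingCost.ShipCost = shipCost;
            DataWorkspace.ApplicationData.SaveChanges();
        }
    }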

Here, I found you need to check whether the SalesOrderHeader.ShippingCost is not defined. (Lines 69-76)

If there is no existing ShippingCost entity associated with the SalesOrderHeader item, a new one is created (Line 71) and its properties are set (Lines 73-75). LightSwitch then updates the database appropriately.

And there is the special case mentioned above, where the SalesOrderHeader does have an associated ShippingCost item, but its ShipCost still holds the uninitialized -1 value (Line 77). For this case, the ShipCost is updated.

Issue: I also discovered in this scenario that the user needs to click the Save button twice in order to have the data for both the SalesOrderHeader and the ShippingCost entities saved.

Perhaps I haven’t done the implementation here as it should be. Or perhaps I’ve come across a behavior which exists in the Beta 1 version in the scenario of linking two entities and needing to update the data in both of them.

Again, I welcome comments describing a better implementation that eliminates the problems I tried to work around.

I think some of the problems in the demo arise from using the intrinsic ShippingCost table with the external AdventureWorks table, and from the way LightSwitch does validation when saving associated entities.

If I modified the schema of the AdventureWorks SalesOrderHeader table to include a new column, say a CalculatedShippingCost column, this code would become less problematic because it would work with just the SalesOrderHeader entity rather than two related entities in two databases.
