How to Un-Screw-Up Your ServiceNow Environment: Watch the Recording Now!

September 7, 2023

Your ServiceNow environment has been in place for years. It feels like you’ve continued to layer problem after problem into how it’s been configured. How do you fix it? How do you get to a position where you can leverage new capabilities quickly and ease the use of the platform as a whole? The Concurrency team will discuss common mistakes made in ServiceNow environments, why they were made, and how to make it better.

- Common mistakes made
- How to fix them and improve the environment
- When to “start over” vs. “fix”
- The best new decisions to make

Transcription

Nathan Lasnoski 6:36 Alright, well, I’m going to play a very minimal role in this session because we have Bryan and we have Michael, who are the clear experts in this domain. And this is an opportunity for us to kick off probably the most interestingly named event of the session: How to Un-Screw-Up Your ServiceNow Environment. Nathan Lasnoski 6:56 It was sort of equally tied to “how I learned to stop worrying and love out-of-box,” for all of you Doctor Strangelove fans. But I think this is really where we’re seeing a lot of common conversations from customers: many, many deployed existing ServiceNow environments that could get better. Nathan Lasnoski 7:15 This isn’t necessarily about just starting from scratch; it’s about how we look at our ServiceNow environment in a way that has an eye toward making it better than what it is today. Nathan Lasnoski 7:25 Building on top of it, improving it, and addressing these common issues that exist within our ServiceNow environment. So in order to do that, let’s introduce our main speakers. Bryan, why don’t you start first? Bryan Schrippe 7:37 Thanks, Nathan. So my name is Bryan Schrippe.
I’m the lead technical architect and solutions architect for the ServiceNow practice at Concurrency. I have about 21 years in the ITSM and automation space, part-time developer and full-time ServiceNow advocate. I’ve done about 70 or so implementations across multiple verticals and varying sizes of organizations, from enterprise Fortune 500 all the way down to small business. So I’ve pretty much seen the gamut when it comes to implementations, both good implementations and bad implementations, and we’re going to share some insights about what we’ve seen. And then I’m going to pass it to my right-hand man, Michael Dugan. Michael Dugan 8:26 Are we able to advance to the next slide? I think our lovely photos are there. Hello all, Mike Dugan. Nathan Lasnoski 8:32 There you go. Michael Dugan 8:33 Thank you. Yeah, I traditionally go by my last name, Dugan. I come from a time here at this company where we had like five Mikes when I started, so feel free to address me as that. Yeah. Bryan always blows me out of the water with his background. I think I’m going on nine years here, heavily involved in the ITSM and ITOM space, a little bit of SecOps as well. And then, yeah, no stranger to automation. If we have to script, we can; otherwise we’re leveraging the power of the platform and the GUI. Michael Dugan 9:00 So I’m happy to share the insights and experiences I’ve had too. I’m really excited for this. Nathan Lasnoski 9:06 One thing I’d interject here as we’re introducing: I’m sure this topic is going to queue up a lot of questions, so we want to answer those. That’s a reason why we have two speakers on this call: we want to be able to answer your questions in real time.
So as you have them, as we go through each detailed area, put them in the chat and we’ll either pull them into the live presentation or answer them in the chat and make that a benefit to everyone. So don’t hesitate to use that from the start. Bryan Schrippe 9:36 Awesome. Thank you. So let’s go into our presentation on common mistakes and how we can go about fixing them. So how does one identify a screwed-up environment? We’re going to go through a list of the common identifiers that we’ve seen across multiple implementations, and the first of those is inefficiencies in workflows and processes. So processes that are meant to be streamlined are still kind of manual and time-consuming, workflows are convoluted and require excessive manual intervention, Bryan Schrippe 10:14 and there’s a lack of automation where it should probably be implemented. Sometimes we see low user adoption, so users are reluctant to even use ServiceNow, and there’s a real resistance to wanting to interact and produce information. Bryan Schrippe 10:34 They’re still stuck in that mechanism of, you know what, I’m just going to call somebody, or I’m just going to send an email. We’re starting to see a lot of IT organizations migrate over to completely nixing their entire phone trees for IT, and even nixing email, so having low user adoption can affect that kind of trajectory. Another issue we see is data quality. Data in the system, especially for ServiceNow, can get so overwhelming that it becomes inaccurate, outdated, and inconsistent. That manifests itself in issues with being able to target specific systems, and with being able to identify KPIs in the environment to justify either new lines of business or terminating applications because of cost overruns or financial information.
When we start seeing extremely large implementations with a lot of development, some of these data quality issues can present themselves. Customization overload: this is another big one. The platform is extremely over-customized, with really complex server scripts on the back end, client scripts on the front end, and extremely complex and deep portals, and it can manifest in a bad user experience. It can cause upgradeability issues. We’ve often run into customers that have not leveraged best practices and have modified the out-of-box business rules, code sets, modules, and widgets that come with ServiceNow, and then Bryan Schrippe 12:36 they’re trying to upgrade and, next thing you know, they’re having to go through 30 or 40,000 skip-log entries every upgrade because they’ve got files all over the place that have been extremely customized. Another big one we see is performance problems. System users are experiencing slow response times. There are slow page loads. There’s a lot of data being shown, hidden, and moved around, and it’s causing lag in the system, and then there are issues pulling data back from the system because of extremely large data sets. A lot of those issues can present themselves. Unresolved bugs and issues: oftentimes we run into systems where they have just launched or implemented ServiceNow and people are starting to at least leverage the tool, and then they start doing enhancement after enhancement, and then bugs start presenting themselves. But those get put on the back burner, because there’s so much front-end work that needs to happen that a lot of these bugs and issues aren’t being taken care of. And then it causes end users, and even admins and analysts, to have less productivity and less efficiency. They’re having a really hard time leveraging the system
to the fullest of its capabilities. Poor documentation: a lot of times when we come into clients that are having issues with their ServiceNow environment, we ask them, how was this implemented? Was there a tech spec? Was there information provided to you, either that you developed or that whoever did the implementation developed, that lets us know exactly what was configured, how it was configured, and what settings were put into place? A lot of times when we see environments that have issues, there is an extreme lack of documentation; knowledge has walked out of the building and left environments stranded with admins who have to pick up the load and try to figure out what’s going on, and trying to reverse-engineer that can become problematic. High maintenance costs: think of the administrative and maintenance costs that go into managing ServiceNow. If there are any issues or feature-functionality problems and your admins are having to go through and make adjustments and fix bugs, that has a cost. Same thing with your end users. If your end users are having to spend time putting in tickets, and those are getting closed by nature of an automation that went awry, or they’re not getting notifications and have to go back and ask for updates, all of that has a labor cost to it. So as those issues keep piling up, those costs can get high. OK, so on to common mistakes that we see in environments that have issues: configurations that are causing performance issues. We’re talking script issues, UI/UX issues, workflow and automation problems from a design perspective, complex processes. All of those things can have underlying performance impacts, and we see them manifest themselves in environments that are having issues. Configurations that cause upgradeability problems.
So like I said in the first discussion, when we go into clients that are having issues and they’ve started modifying some of the out-of-box modules, widgets, scripts, and code bases, it causes issues when upgrading, and if environments are extremely customized and they run through an upgrade, that upgrade process can be extremely tedious. Bryan Schrippe 16:59 So by taking a look at your environment and streamlining how you’re doing your updates and all your configurations and customizations, you can mitigate some of that. But we see a lot of that in clients. Nathan Lasnoski 17:15 I’d say that seems to be one of the bigger ones. I bet our audience is recognizing the same thing for themselves. This seems to be one of the sore thumbs that causes people to really recognize how far off the beaten track they’ve gotten with their ServiceNow environment. That’s when they start coming to the conversation of, should we just start over, or should we throw out this work that we’ve done, when it takes them six months to go from version to version because of all the work they’ve put in that may or may not be adding value with the new capabilities being offered down the pipeline. Bryan Schrippe 17:52 Yep, I would 100% agree. So another thing that we see is automation design complexity. With the advent of Workflow, and now leveraging Flow Designer in all of its many facets, we are starting to see a lot of clients stuffing a lot of automation into singular workflows and singular flows and not taking into account the amount of time it takes to run those specific things. So it can cause performance impacts, it can cause workflow slowdown, and it can cause concurrency issues where flows can stack up, and once it gets to a certain point, some of those flows actually stop executing. By changing the way you do some of your automation design,
you can streamline some of that, and in some cases you may even have to redesign. But having too many steps, having a lot of actions inside of activities, not using flows and subflows to containerize your automations: a lot of that can be mitigated. Another big one we see is script performance. Running large amounts of code in scripts, instead of passing that load off to the back-end system to do the processing and return the information, is a big issue. I remember a client we ran into that had one business rule that was over 600 or 700 lines of code, and it would run every single time a form loaded. Think about that from a performance perspective: having to run all of that code on load, every single time you open up a ticket. It can get pretty daunting. So understanding and taking a look at your scripts, what files have been updated, how they’ve been updated, what rules have been updated, and what script practices have been leveraged, all of that comes to bear when we’re talking about script performance issues. We also see a lack of automated test frameworks. ATF, if anybody doesn’t know, is a testing framework that ServiceNow has developed that allows you to benchmark tests against your customizations, configurations, and processes. It can enhance the way you do upgrades, but it can also help you validate your development efforts by creating ATF tests for your customizations. Bryan Schrippe 20:45 Although it’s a little bit of work up front, it can save you exponential amounts of time on the back end when you’re trying to do testing for things like enhancements and configurations. But when we come into a client where all they’re doing is testing with a spreadsheet of work steps, and they have to do hundreds of them, there’s a cost associated with that, and those costs can be rather high
if you have a lot of processes and customizations. Environment differentials: we see this a lot too. Some clients that we’ve run into have issues with environments being different from their lower environments. We’re talking dev, test, prod, UAT, sometimes a sandbox environment. Go through and look at the cloning profiles and your data preservers. If we don’t see a lot of cloning history, we may have an idea that maybe there’s been little to no development in the environment, or maybe there have been massive amounts of development pushed up to prod but not necessarily cloned back down. There are some issues from a development-cycle perspective that manifest themselves when you don’t have a properly cloned environment and proper data preservers. And then we kind of talked about process complexity: if you have extremely complex processes that aren’t fully automated and have a lot of manual touch points, there can be a breakdown, not only in efficiency but also in the amount of time it takes to get a specific process completed. Bryan Schrippe 22:46 We see that a lot in environments that are having issues. Another big one is they’ve bought licensing only to not leverage it. ServiceNow is the Cadillac, I mean, the Rolls Royce, of platforms. And if you’ve bought licensing for multiple modules but you’re only using 10% of it, there is a lot of capability, a lot of efficiency, and a lot of relational data and insights that you could bring to bear about your business being left on the table, and a high cost associated with that too, because when you signed up with ServiceNow, normally you signed up for a multi-year deal. And if you’re paying for that licensing and not leveraging it, you’re basically paying for 10% of your product, not 100% of your product.
Nathan Lasnoski 23:44 I think next to the configurations causing upgradeability issues, that’s probably the second biggest thing we’re seeing: companies have invested, and they have an overpaid ticket-tracking tool. They do incident management, and that’s about it. Bryan Schrippe 23:52 Yeah. Nathan Lasnoski 24:01 And they bought the licensing for the Cadillac, and it’s not because they don’t want to use it, but because they’ve never gotten to a point where they can comfortably leverage those assets in the organization, either because it takes them so long to do it, or because the current environment is preventing them from making forward progress. Bryan Schrippe 24:18 Exactly. Another thing we see is not meeting customers where they are. A lot of customers are strictly relying on phone and email to ingest incidents. They’re not properly leveraging a portal-based infrastructure, or a chat-based infrastructure, or an integration into Teams. There are so many avenues to meet your customers where they are, to ingest incidents, solicit feedback, or allow them to self-serve, and customers don’t take advantage of them when they should in order to see exponential gains, not only in their customer service but in adoption. And then another common use case we see is limited use of CSDM. A lot of companies are just getting started on their configuration journey. They know that they have assets and devices and things that they need to track inside of the environment, but they don’t necessarily know how to leverage CSDM to the fullest of its capabilities to get the most out of it. Specifically around your business services and technical services: being able to aggregate CIs into those services, and then benchmark incidents, problems, changes, and requests against those services to make decisions.
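The service-level benchmarking described here can be sketched in a few lines. This is an illustrative stand-in, not code from the webinar: outside an instance we group plain objects, while inside ServiceNow the same question would typically be answered with a GlideAggregate over the `incident` table grouped by `business_service`. All record numbers and service names below are made up.

```javascript
// Count incidents per business service, the kind of CSDM benchmark described
// above. Data and names are illustrative; in ServiceNow you would use
// GlideAggregate on `incident` grouped by `business_service` instead.
const incidents = [
  { number: 'INC0001', business_service: 'Email' },
  { number: 'INC0002', business_service: 'Email' },
  { number: 'INC0003', business_service: 'Payroll' },
];

function countByService(records) {
  return records.reduce((counts, rec) => {
    counts[rec.business_service] = (counts[rec.business_service] || 0) + 1;
    return counts;
  }, {});
}

const counts = countByService(incidents);
console.log(counts); // { Email: 2, Payroll: 1 }
```

A count like this per business service, trended over time, is what lets you justify investment in (or retirement of) a service rather than treating the CMDB as asset tracking alone.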
We see a lot of customers using it as basically glorified asset tracking, associating incidents to CIs, and not leveraging the full capabilities of what CSDM can offer. Bryan Schrippe 26:03 And then the final one is shadow development. We’ve seen some environments where maybe the security is not tight enough, or there are developers doing work in the background in dev environments and promoting into other environments without going through the platform owners, and you start seeing wonky issues in production and have to go back and find out what’s been updated and what’s been modified. And a lack of change control processes, where releases and development sprints may be put into production without going through proper testing. Bryan Schrippe 26:48 We see a whole bunch of issues that happen around shadow development, and hopefully we can give you some tools and tricks to help mitigate some of that. OK, so what are some symptoms of suboptimal instance performance? We’re talking slow load times, high API usage, and extremely large data sets; we’re talking tables with many millions of rows. Now, for enterprise companies that have extremely large data sets, where they’re doing 12 million records a year, that’s perfectly fine. Bryan Schrippe 27:30 They’ve been given the infrastructure from ServiceNow to support that. But if we’re talking smaller companies that are maintaining extremely large audit tables (like, “oh, we need 7 to 10 years of information”) while still generating tens of thousands of records per week or month, those data sets can get large and cause issues. Concurrency issues: issues with automations running, database locking, and automations trying to update similar incidents. Trying to do things at the same time becomes problematic. Indexing issues.
So when you’re trying to retrieve data from a table and the index isn’t being maintained, you can have really bad performance from a data retrieval perspective, and it can cause issues not only in load times but also on the front end, when users are trying to create tickets and have to go get relational data; all of that can slow down because you’re not maintaining indexing. Lengthy execution times: the time it takes to do specific tasks, like saving an incident, submitting, or triggering an automation and having that automation run. We’re talking things like scripts, flow actions, specific events, business rules, or even client scripts taking a long time to execute. So what are some things we can do? I’ve broken this out into things we advise our clients to do daily, weekly, and monthly, and it’s a way to track (a) how healthy the instance is, (b) how long things are taking, and (c) on a monthly basis, how things are trending. So one of the areas we look at is system diagnostics. System diagnostics is an area inside of ServiceNow that gives you error information about each of the nodes in your environment. Michael Dugan 30:10 Hey, Bryan, we can’t see your screen there. Bryan Schrippe 30:11 Oh, did I just share the presentation? Michael Dugan 30:13 I think you might have to. Bryan Schrippe 30:14 Oh, that is my fault. I apologize, guys. Michael Dugan 30:17 All good. Didn’t mean to interrupt. Bryan Schrippe 30:20 Thank you, Dugan. Appreciate it. Alright, can you all see it now? Michael Dugan 30:25 You bet. Bryan Schrippe 30:26 Apologies. All right. Thank you, guys. So this is system diagnostics. It’s basically maintained based on your system architecture.
It’ll let you know how your nodes are doing. Things I typically take a look at are memory usage, transactions, and errors, and just because I’m a bit of a perfectionist, I tend to look at these every day and notate what the numbers are for various environments. If I normally have maybe five or six errors but then jump to 5 or 10,000 errors, then maybe I’ve got something going on in the environment that I need to take a look at. I also like to look at the uptime and whether there’s any scheduler queue length (for anybody that understands the back end, the ECC queue); if there’s any type of queue slowdown, it’ll show up here. Same thing with your email event queue. Bryan Schrippe 31:38 There you’ll see, at least in my environment, that there are no emails being sent, but it’ll let you know if you’re having email issues and what the usage is. It’s a really good utility to look at on a daily basis. API usage is another area I like to take a look at, and this is specific to companies that allow third-party integrations, especially if you’re doing them from a customer perspective or via REST or SOAP. Companies that are actively accessing that information and pulling that data back consistently can cause performance slowdowns if they don’t have it set up right. So there are different tools you can look at, such as usage by request. There are also a couple of other dashboards specific to API usage that will tell you what tables are being hit, who’s hitting those particular tables, and how long it’s taking for particular requests to be serviced by the system. So if you’re seeing, OK, for the table API, maybe we have one requester that’s making a bunch of requests at a given moment.
Well, maybe we need to understand why that’s happening. It could be causing a slowdown, because as they’re trying to get information, the system may not have enough resources to serve that information up to customers, analysts, and admins. So you need to be careful with how often REST APIs are getting triggered. Transaction logs: I like to take a look at transaction logs to understand what response times look like across the environment. A lot of this is API-based, but I like to see what the highest response times are and start looking at what exactly is causing those slowdowns. 29,000 milliseconds isn’t that terrible, but if you start seeing numbers in the hundreds of thousands, you might want to look at why those are slow. Maybe it’s how they’re querying the data, what range they’re querying, or that it’s not being batched. There are a lot of things to look at in order to understand that. Areas to review weekly: scheduled jobs. Understand what scheduled jobs are running on a weekly basis and how long they’re taking. This specific filter is any scheduled job that has a response time of greater than, I think, an hour. So this is me taking a look at what scheduled jobs are out there and how many are taking a long time, because as they take a long time, it’s indicative of how much data they’re trying to grab, modify, or clean up, and it could indicate some sort of issue inside the environment that we need to further investigate.
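The weekly check described here, flagging anything that ran longer than a threshold, can be sketched as follows. This is an illustrative stand-in with made-up data: the field name `response_time` mirrors ServiceNow's transaction log, but inside an instance you would build this as a list filter or a GlideRecord encoded query rather than filtering an array.

```javascript
// Flag log entries whose response time exceeds a threshold (one hour here,
// matching the scheduled-job filter described above). Data is illustrative.
const THRESHOLD_MS = 60 * 60 * 1000; // one hour in milliseconds

const transactions = [
  { url: '/api/now/table/incident', response_time: 29000 },            // fine
  { url: '/nightly_cleanup_job',    response_time: 4 * 60 * 60 * 1000 }, // slow
];

const slow = transactions.filter(t => t.response_time > THRESHOLD_MS);
slow.forEach(t => console.log(`SLOW: ${t.url} took ${t.response_time} ms`));
```

The same threshold expressed as an encoded query condition (e.g. `response_time>3600000`) lets the database do the filtering instead of your script.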
And then again, I scope that transaction log down to just the top 20 transactions, so I can understand, hey, what are the top 20 that have happened this week, and whether there’s something going on with an automation, an API event, a table execution, a business rule, or a client script, so I can do some digging. Errors: review monthly. Monitor your table growth. Take a look at some of the most commonly used tables, like your task table, your incident, problem, change, and request tables, your configuration item tables, and your log tables, and make sure they’re not getting too crazy high. If they are, then you might need to do some auditing and archiving of the data, or remove stale data if it’s not being actively managed or monitored. Bryan Schrippe 36:15 I like to look at the performance dashboard monthly. This is a dashboard inside of ServiceNow that is strictly for performance. I must have had the wrong link in there, but this is the 30-day graph that basically lets us know how our system is working over time, so you can see how business rules are being tracked, what concurrency is happening, how the network traffic is, and your database usage, and then you can look at transaction usage. Bryan Schrippe 36:58 So if you have a high spike in transactions, you might want to take a look at that and see why that happened. Maybe there was something going on. Same thing with response times: if you see extremely long response times happening for a given point in time, you might want to dig into why. It could be indicative of a rogue script. User sessions, session wait queues,
semaphore use: all of this information is good to look at monthly, just to see if you have extremely high spikes in usage and to identify what may be causing those spikes. This is a nice little area that I like to check out monthly too: slow scripts. This takes a look at scripts that may be taking a long time. And again, this is all just demo data; when you look at this, you would want to segment it to just scripts that you or somebody else has created, rather than ServiceNow’s, but in this area you can see if there are particular scripts that are taking a long time to execute, and maybe there’s a problem with the code. Bryan Schrippe 38:09 Maybe there’s a problem with how it’s executing. It’s a really good mechanism to benchmark your scripting as far as customizations are concerned. And then slow transactions, which is basically a rehash. Monthly: index suggestions. This is another area where ServiceNow will actually suggest whether or not a table should be reindexed, and going through and making sure you’re actively maintaining those indexes is a good thing to do over time, because it streamlines table performance. And then sys_user row count. This is actually something that’s really interesting. By default, ServiceNow sets the row count at 20 for anything on the back end of the system. Bryan Schrippe 39:03 Now let’s say you have a large environment, maybe 500 to 1,000 people using ServiceNow, and let’s say everybody in that environment sets their row count to 100. It’s not only returning all of those rows, but it’s also returning the row information along with relational data. And if all of those people are hitting the system at any given moment, it can be overwhelming to pull back that data for a large subset of users.
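A minimal sketch of the row-count audit just described. This is an assumption-laden illustration, not code from the talk: in an instance the check would be a query on the user preference table for the per-user row-count preference, and the exact table and field names should be verified in your environment; here a tiny in-memory stand-in shows the shape of the check. The default of 20 is as described above.

```javascript
// Find users whose list row-count preference differs from the default of 20.
// The records below are made-up stand-ins for user preference rows; names of
// users and fields are hypothetical.
const DEFAULT_ROWS = 20;

const preferences = [
  { user: 'abel.tuter',    name: 'rowcount', value: '100' },
  { user: 'beth.anglin',   name: 'rowcount', value: '20'  },
  { user: 'charlie.whitt', name: 'rowcount', value: '50'  },
];

const nonDefault = preferences.filter(
  p => p.name === 'rowcount' && parseInt(p.value, 10) !== DEFAULT_ROWS
);

console.log(`${nonDefault.length} user(s) with a non-default row count`);
```

If the list is long, that is the signal to either reset the preference in bulk or have the adoption conversation the speaker mentions.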
So what I do is I come here and take a look at this particular area and validate which users have set their preferences to something other than the default, and then if it’s a massive number of people, we may want to curb that or change it back to the default. Or you can have conversations with them to help them understand that there are penalties that come with massive row counts. OK, so sometimes a lot of our clients have upgrade issues. They are seeing failure messages during their upgrades; incompatibility errors with plugins or modules that they should be maintaining; custom script failures; integration failures; and UI and form issues from customizations they’ve made, where maybe there’s been a change in ServiceNow’s release that has broken something from a client-side perspective, or scripting and Glide API changes. So, some solutions there. What I like to do is, at least monthly, run an instance scan. What this instance scan does is run almost 600 different tests against your instance and let you know about things like code violations and update issues. It’ll let you know what priority the issues are. Just with some demo data, you can see here that there are some low issues and some high issues, and then you can come in and segment those out, and it’ll let you know if there are script issues or security issues. Getting those rectified will streamline some of your upgrades, because it keeps some of those issues in check before you actually upgrade. And then the other one is upgrade preview.
This is something that’s relatively new, within the last one to two years, but having the ability to preview your upgrades and proactively mitigate anything before the upgrade is something I encourage all of our clients to do, because you can see what skip logs are going to be presented to you before you upgrade. As you work through those skip logs before an upgrade, you can proactively mitigate anything being updated, modified, or changed, and it will only streamline the upgrade process for you. So skip-log run-throughs are extremely important. And then again, skip-log review: making sure you’re using upgrade preview and running through your skip logs to ensure that anything that should be updated is updated before you actually upgrade. All right, and with that, I’m going to pass it over to my partner in crime, Dugan, to go through some script performance. Michael Dugan 43:25 Thank you. Thank you. I’m going to take over here. So yeah, what I really wanted to hit on is optimal script performance. Let me get this out of the way first: this isn’t going to be about how can I make my scripts faster. This is going to touch on what we should be doing, where we should be doing it, dos and don’ts. Just a lot of good stuff packed into one slide. So, one of the things I’ve seen in the past that I like to call out is no defensive programming. When I say that, what I really mean is: know what you’re expecting and code for it. A lot of us, and I’m sometimes guilty of it, just to get something up and running, script it out just to do it, and it just does what it does. We don’t really wrap anything around it and say, well, what if this happens, or is this really going to do what it’s supposed to do? We need to make sure that we code for it and get in and get out.
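The “know what you’re expecting and code for it” point can be sketched like this. The lookup below is a hypothetical stand-in for any data access (a record lookup, a REST response payload, a user input): guard the cases you know can happen instead of assuming success.

```javascript
// Defensive version of a simple field read: every failure mode we expect
// (no record, missing field, empty field) is handled explicitly, so the
// function fails fast instead of throwing somewhere downstream.
function getAssignmentGroupName(record) {
  if (!record) {
    return null; // nothing to work with: bail out early
  }
  if (typeof record.assignment_group !== 'string' || record.assignment_group === '') {
    return null; // field missing or empty: handled deliberately, not by accident
  }
  return record.assignment_group.trim(); // the happy path, and only the happy path
}

console.log(getAssignmentGroupName({ assignment_group: ' Service Desk ' })); // "Service Desk"
console.log(getAssignmentGroupName(null)); // null
console.log(getAssignmentGroupName({}));   // null
```

The point is not the specific field; it is that each `return null` documents a case you thought about, which is exactly the “wrap something around it” discipline being described.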
So automation is great when we have a start and a finish, and it's clear, it's defined, we've tested it, and it's routinely doing what it's supposed to do. We have to be very explicit about that. We have seen loops and recursion. If you have to, this is one method you can rely on, but I highly recommend against it. What this basically is: if you're trying to recursively go kind of up the ladder, up the staircase, to find a piece of data that you might need, or if you're looping through the records that you normally go through, just set a limit or work with that data set in a very specific manner. Just control yourself and make sure that you do what you need to do and don't do anything else. This kind of speaks more to the testing phase, which I'll get to eventually. I think Brian already touched on this, so this speaks to limiting your data set. With those large customers having that ginormous task table and the growth there, the GlideRecord query that most of your developers are using actually returns the entire data set to you. So we can limit that; we can make the database do the work for us. Sort it, limit what it's returning, and in turn that will actually increase performance within the instance. The more you limit your results, the more the performance will shine. So I highly recommend that, and I'm kind of listing some of the symptoms here; we'll get to solutions shortly. So, retrieving rich objects. I used to do this in the past. When you're getting things from ServiceNow, it gives you a robust object, so you can do things with it, like methods, and what can't you do. But in reality you often just need the value, right? Just the string value and not the entire thing, because the entire object holds memory.
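A minimal sketch of those two points, in plain JavaScript rather than platform-specific code: guard the inputs you expect (defensive programming), and put a hard bound on any loop or "walk up the ladder" traversal. The function and field names here are hypothetical, just for illustration.

```javascript
// Hypothetical helper: resolve an escalation group name for a priority value.
// Defensive programming: validate what we expect before acting on it.
function resolveEscalationGroup(priority) {
    // Guard the input instead of assuming the caller passed something sane.
    if (typeof priority !== 'number' || priority < 1 || priority > 5) {
        return null; // get in, get out: fail fast on unexpected input
    }
    var groups = { 1: 'Major Incident', 2: 'Service Desk L2' };
    // Explicit fallback rather than silently returning undefined.
    return groups[priority] || 'Service Desk L1';
}

// Bounded traversal: walk up a parent chain, but with a hard limit so a
// cyclic or unexpectedly deep chain cannot loop forever.
function findTopParent(record, maxDepth) {
    var current = record;
    for (var depth = 0; depth < maxDepth; depth++) {
        if (!current.parent) {
            return current; // reached the top
        }
        current = current.parent;
    }
    return null; // limit hit: treat as an error case instead of looping on
}
```

The same pattern applies to server-side scripts on the platform: decide the maximum depth or record count up front, and treat hitting that limit as a signal something is wrong rather than continuing.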
And speaking to some of Brian's past slides, this may be increasing load times and execution statistics and everything, so all we really need is to get that one value and get out. So staying away from using the entire GlideRecord object is recommended there, and I used to do it the other way in the past. I'm very particular about this now: making sure you have unique variable names. I can't tell you how often I've gone in and seen the scripts out on the community; everybody uses "gr", short for GlideRecord. If you can, use something based on your last name or something long and unique, so that we don't have any collisions or any weirdness going on when we have other runtime scripts running in the background. So, I kind of hit on this a little bit: I like to use the term pseudo code. We do this with our normal implementations as well. Just write out, whether it's code comments or just documentation, what you have to do, make it known that you have to do A, B, C, and then just go for it. The more you define it and the clearer vision you have, the better your outcome will be. So I love going in with code comments usually, or just writing user stories, or whatever I need to do to make sure that I do it correctly, and then I don't inflate it with anything else. What I want to share is the documentation article: setLimit is huge here, so I want to click this. This goes back to the large data sets being returned. What this does within the GlideRecord API, and I'm not going to touch on it for too long because it's pretty technical, is limit the records returned. So like I said, you can make the database do the work for you, and this drastically improves performance. You can see that without this one line of code here, the entire data set is returned, and it's just sometimes hard to work with.
It kind of gets in the way of what you're trying to do with development, and really all you need is maybe just a sample set, or if you're expecting just a single record, why not just say, hey, just give me one. And if you can sort it in a certain way to get that filtered down, that would make sense, so we should do that. It looks like I have another one, and I think that's it. So, a really good resource I always like to call out, whether you're new to this or you've been doing it for a while and just want to test your knowledge: the developer site from ServiceNow is a great resource for starting out. They do have Now Learning for training, but what this is really about is the categorization over here under best practices. Are you doing things the right way, adding comments to your code, using white space so it's legible, and all the good stuff? This is a free resource out there. It's in the slide deck that we're sharing out: just things to do, and things to watch out for that you shouldn't do. Big resource. I sometimes come back to it for some of the topics I talked about here. Great resource, can't recommend it enough. So, automation design. Really this is more like use the right tool for the job. I know a lot of folks come from the background of Workflow, with the graphical activities and arrows and everything, and there are tools for certain reasons. But with ServiceNow introducing more current technology, they're optimizing for certain situations; it's more efficient, it's easier to use. We're going to touch on Flow Designer shortly here. That's the low-code/no-code tool that they've been practicing and preaching, huge. So if you can get away from sort of legacy concepts and technology, ServiceNow is definitely looking forward and wants to make sure that your instance is fine-tuned for whatever you're gonna throw at it.
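The setLimit guidance above can be sketched like this. The GlideRecord calls (setLimit, orderByDesc, getValue) are the real API, but since that API only exists on the platform, the sketch includes a tiny hypothetical stand-in so it can run outside an instance; on a real instance you would keep only the query code at the bottom. The table contents are made up.

```javascript
// --- hypothetical stand-in for the platform's GlideRecord (illustration only) ---
function GlideRecord(table) {
    // Pretend the incident table holds these three rows.
    this._rows = [
        { number: 'INC0001', sys_created_on: '2023-01-01' },
        { number: 'INC0002', sys_created_on: '2023-02-01' },
        { number: 'INC0003', sys_created_on: '2023-03-01' }
    ];
    this._limit = this._rows.length;
    this._index = -1;
}
GlideRecord.prototype.orderByDesc = function (field) {
    this._rows.sort(function (a, b) { return b[field] < a[field] ? -1 : 1; });
};
GlideRecord.prototype.setLimit = function (n) { this._limit = n; };
GlideRecord.prototype.query = function () { this._rows = this._rows.slice(0, this._limit); };
GlideRecord.prototype.next = function () { return ++this._index < this._rows.length; };
GlideRecord.prototype.getValue = function (field) { return String(this._rows[this._index][field]); };
// --- end of stand-in ---

// The actual pattern: if you only need the newest record, say so.
// setLimit(1) lets the database do the work instead of returning every row.
var incGr = new GlideRecord('incident');   // a unique variable name, not "gr"
incGr.orderByDesc('sys_created_on');
incGr.setLimit(1);
incGr.query();
var newestNumber = '';
if (incGr.next()) {
    // getValue() returns just the string, not the whole rich object.
    newestNumber = incGr.getValue('number');
}
```

Without the setLimit(1) line, every row in the table comes back and you pay for all of it; with it, the query returns one row and getValue keeps only the string you need.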
And in this case, if you're actually developing in some of those legacy tools, sometimes you have to make duplicates of code samples or pieces of business logic, and you can't reuse them. What I'm about to talk about with Flow Designer is actually really huge: kind of like source control, you just make something in one spot and everything points to it. So it's a maintainability piece and you can reuse it. And inefficient flows: we've seen this, and I've been guilty of it too, but I know with Vancouver there are some handy features coming out. Having numerous steps in a flow, and Brian talked about this, maybe upwards of the max, which is traditionally 50 actions in a flow: if you can offload that to a subflow and have a modular component run it for you, kind of off to the side, then that's even better. I know they're starting to introduce some performance increases there, flow times and execution in general, so reusability is huge. We have seen some cases where customers are building things from scratch because they don't have the licensing or can't budget for their future forecast, when in reality ServiceNow has already built it, they maintain it in-house, and they have something ready for you to use. Time to value is huge. I'd recommend working with your AE if you can, to see whether you can fit it into the current licensing plan and go from there, rather than building it from scratch with the components that they give you, REST messages and such, within Flow Designer. So, I talked about this, and I can't remember where this even goes, but subflows are huge. Think of it this way: if anyone's into Legos here, they give you all the blocks and the instruction booklet of how to build it; subflows are kind of like compartmentalized Legos.
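The reusability point above can be sketched as plain functions: a subflow is like one function defined in a single spot that several flows call, instead of each flow duplicating the same steps. This is only an analogy, with made-up flow and step names; real subflows are built in Flow Designer, not hand-written like this.

```javascript
// Hypothetical shared subflow: the bundled "provision an account" steps,
// defined once and maintained in one place.
function provisionAccountSubflow(user) {
    var steps = [];
    steps.push('create account for ' + user);
    steps.push('add ' + user + ' to default groups');
    steps.push('send welcome email to ' + user);
    return steps;
}

// Two different flows reuse the same subflow instead of copying its steps.
function onboardingFlow(user) {
    return ['approve onboarding'].concat(provisionAccountSubflow(user));
}
function contractorFlow(user) {
    return ['verify contract'].concat(provisionAccountSubflow(user));
}
```

If the provisioning steps change, you change them in one spot and every flow that points at the subflow picks up the change, which is exactly the maintainability win described above.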
So I'm actually going to show a little demo here, not a full end-to-end demo, but just something that was out-of-box demo data that ServiceNow had for Microsoft AD. Instead of doing these individual actions, they actually have a single component that has a group of actions already bundled in. That's where subflows shine. It's just one piece here, one little action, but deep down they actually have other steps to run. You just don't see them; they're kind of packaged and bundled secretly behind the scenes. So it cleans up your flow design, it's easier on the eyes, and your business analysts can even leverage these, because they're repeatable. It's defined in one spot and you don't have to define it anywhere else, kind of like a code repository where you work on code in a single spot, so it's huge. Rather than building individual steps, we can just use a subflow, point to that, and have the work be done behind the scenes. Back to PowerPoint here. I included this just for folks that may not be aware of getting started with Flow Designer. I think this goes to the developer site again, or this one is actually a community article. Really great resources in the community. I've started to see an emergence of ServiceNow employees really doing their due diligence and saying, hey, watch out for this, do use this, whether it's something upcoming or a great feature we've seen that works well for folks. They've mentioned some training resources. Popping back to the PowerPoint here, because I thought I had one more in here; maybe it wasn't included. So, the final thing I want to touch on here. This is huge. This is Automated Test Framework, commonly known as ATF with all the acronyms going around lately. What this does for you: it can be an automated test suite, but think of your developers today, or even business analysts, running your UAT processes.
A lot of testing is done manually: clicking around, going here, setting up some test data or mocking a situation. Well, ServiceNow has a solution for that. ATF is huge. So consider this: if you're a new product owner, or if you're switching organizations, or you come into a brand new environment, how are you gonna know whether whatever you make or design is going to succeed or fail? With this we have increased confidence: if you leverage it, it'll tell you, hey, this failed, and here's why, and going and resolving that ahead of time, before we go to production, is huge. So we have more confidence by leveraging this tool, and it gets rid of manual testing; that's kind of why I wanted to point it out. This is awesome too. Like I just said with code repos, it's that concept of having something in one spot, and this is where you maintain it. ServiceNow has test suites, they have test cases, and it gets down to an individual test level. There are so many features to this tool; they even have parameterized testing, parameters you can pump in. So if you're telling me, hey, I've got these custom scripts and I don't think I can do this because there's just so much to it and it's so complex, I'll disagree, because they have parameterized testing where you can pump in mock values, assert the situation, and truly simulate what you have built for your organization. ATF is a great way to test all of that. It does take a little bit of time to get up and running, and you may be intimidated by, you know, do I need training?
It's very simple, it's easy to use, it's intuitive; I highly recommend using it. Keep your test cases in one spot, and whether you want to schedule them to run automatically every business quarter, every month, or every day, you can definitely do that, and it helps with upgradeability, just getting that scheduled and done for your organization. Michael Dugan 54:47 This is huge. I believe the Upgrade Center and ATF go hand in hand, so if you go to the Upgrade Center page, where it tells you, hey, is my instance ready to upgrade, it will actually have some ATF tests baked in there, and it gives you some timings, like hey, this could take this amount of time, or these are some areas you need to look at because your tests are actually failing, and you should look at those before you upgrade. So ATF and upgrades, they're meant for each other. I'm just going to touch on this quickly to remain cognizant of time. Really, to get it going, it's just a few properties. It's a big splash page of properties that may be intimidating, but really you just need to make sure you're doing this in a sub-prod environment, so not within production: likely either dev or maybe your test environment. Just turn on a few properties here and get it going. There's a whole bunch of options for how you want the client test runner to grab your browser and work with it, and you may or may not want to look at those and tweak them, but it's just a few up top, and then some for debugging and such. A huge thing with ServiceNow is that they provide out-of-box quick start tests. So I'll go to a list and show you that. You may not know how to build these, and that's totally fine, but you get sort of cookie-cutter templates: you can copy what ServiceNow does. I actually just went to one, but if you see this big list here, they have a whole bunch for things like change management.
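The parameterized-testing idea mentioned earlier can be sketched outside the platform like this: one test definition, many sets of mock input values, each asserted the same way. In ATF itself you build this in the test designer rather than in code, and the priority function and parameter rows here are hypothetical.

```javascript
// Hypothetical function under test: a priority derived from impact + urgency,
// the kind of custom logic you might want to cover with parameterized tests.
function calculatePriority(impact, urgency) {
    if (impact === 1 && urgency === 1) return 1; // critical
    if (impact === 1 || urgency === 1) return 2; // high
    if (impact === 2 || urgency === 2) return 3; // moderate
    return 4;                                    // low
}

// Parameter rows: the mock values you "pump in", plus the expected result.
var parameterRows = [
    { impact: 1, urgency: 1, expected: 1 },
    { impact: 1, urgency: 3, expected: 2 },
    { impact: 2, urgency: 3, expected: 3 },
    { impact: 3, urgency: 3, expected: 4 }
];

// Run the same assertion for every parameter row; collect any failures.
function runParameterizedTest(rows) {
    var failures = [];
    rows.forEach(function (row) {
        var actual = calculatePriority(row.impact, row.urgency);
        if (actual !== row.expected) {
            failures.push(row);
        }
    });
    return failures; // an empty array means every row passed
}
```

The point is that one test definition covers every input combination you care about, and scheduling it (as ATF can) replaces the manual click-through before each upgrade.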
They do a good job of prefixing the module, I guess like the product or capability, in front of it. So, looking at this one, I know it's HR-related or change-related because I can see the change record prefix there. They give you really good starter tests to copy from, and what can't you do with this? You can impersonate users, you can have it go to a service portal and click buttons, you can mock data, you can query for data, all the stuff that you would want to do when you're building automation or integrations or anything like that. And again, I think this might be the one that takes you to the developer site, or it's actually a YouTube video. I wanted to highlight that ServiceNow's support channel on YouTube is a great resource; they're constantly putting out really good content. This is just a quick 10-minute guide on how to get up and running, plus a little bit of background on what it is. So that's a great resource. And here we go, the developer site; I'm not lying, it's a great resource. I'm constantly going here for some of their API guides, so if you go into the reference section you can look at some of the APIs and see what they're expecting when you're automating on the platform, and you can use ATF to test all of this. So, great, robust functionality baked into the platform; highly recommend you take advantage of it. It's going to improve your upgrade times and more. I think I'm going to give it back to Brian. Yeah, you're muted there, Brian. Bryan Schrippe 57:45 Thanks again. So we will get going. I will share back out my screen and, yep. Michael Dugan 57:51 Quick time check, we do have about 6 minutes here. Bryan Schrippe 57:54 So, just to be cognizant of everybody's time: there is way more content in this deck, which everybody will get. We're not gonna be able to touch on everything in here, but there are some other good things in here for you guys to take a look at.
I'm just gonna touch on two really quick things. We touched on overly complex processes, and we talked about some of the symptoms, but I wanted to touch on some of the other solutions that we often find ourselves advising clients about. Document your processes in their entirety, and ensure that a process is going to be stable before you do anything like automating it or trying to speed up its efficiency. A lot of times we have people that try to automate a process that is constantly evolving and changing, and that doesn't make a good candidate for automation at that point. So making sure that you actually document that process is extremely important. So, process mapping: create a flow chart of the process. If you have an automation in progress that you're having an issue with, take a look at that process flow, chart it out, and find out where the bottlenecks are. It helps identify redundancies and unnecessary steps, maybe where you can cut out specific areas to speed up that process, be it approvals or manual touch points or anything like that. Mapping out the process in its entirety is extremely important. And develop KPIs to establish how that process is performing. Maybe it's understanding what the percentage is between each step, or what the completion rate is between approval and completion. Understanding an extremely complex process from an efficiency standpoint is pretty important, and determining KPIs to measure it against is something you should take advantage of. Or even do a complexity assessment: develop a set of criteria or a checklist to assess process complexity, considering factors like the number of steps, hand-offs, the volume of data you might be working through, or even the individual forms you're working through. And then again, automation opportunities.
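The step-percentage KPI mentioned above can be sketched as a small calculation: given counts of items reaching each step of a process, compute the percentage that survives each hand-off, which quickly shows where the bottleneck is. The step names and counts here are made up for illustration.

```javascript
// For each consecutive pair of steps, compute the percentage of items
// that made it from one step to the next.
function stepConversionRates(stepCounts) {
    var rates = [];
    for (var i = 1; i < stepCounts.length; i++) {
        rates.push({
            from: stepCounts[i - 1].step,
            to: stepCounts[i].step,
            // Percentage of items that survived this hand-off.
            percent: Math.round((stepCounts[i].count / stepCounts[i - 1].count) * 100)
        });
    }
    return rates;
}

// Example: 200 requests submitted, 180 approved, 90 completed.
var rates = stepConversionRates([
    { step: 'submitted', count: 200 },
    { step: 'approved', count: 180 },
    { step: 'completed', count: 90 }
]);
// The approved-to-completed drop stands out as the hand-off to investigate.
```

A number like this per hand-off is exactly the kind of KPI that makes a complexity assessment concrete instead of anecdotal.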
So, finding processes that are extremely straightforward, that you're doing all the time and doing well, that don't have very many bottlenecks: those are your key candidates for automation, and the more you can get them into something like Flow Designer or Workflow, the more beneficial it's going to be for you. All right, now I'm going to skip around because I want to talk about one other thing really quick, and that's shadow development. So, if you're experiencing data inconsistencies or increased complexity, you're having difficulty troubleshooting, there's a bit of a loss of control inside your environment from a security perspective, you don't know who's accessing or updating what, when, and why, or you're having difficult upgrades: you can review your audit logs, you can regularly inspect all your configuration records, and you can bolster peer review and collaboration. Make sure that any code is being peer reviewed by other people, and start fostering an environment of collaboration where everybody works on code together and people look over each other's shoulders to make sure that what they're developing follows best practice. It's worth making that effort to foster collaboration, because it can only allow your developers to be more free in how they do their development. And then we also have a tool that we've been developing in-house; it's called Fixed Ants. Basically it's an amalgamation of a bunch of different tools, but the one thing I wanted to point out here is that we actually go out and look at every single application inside the tool, and also what's been updated inside each of those applications.
So you can actively monitor and audit them to see if something like shadow IT or shadow development is happening in your environment, because the more you can get ahead of it, the better, and it's better to help somebody understand that what they're doing isn't exactly what we want them to do. Bryan Schrippe 1:02:36 And that they need to follow the process. But we've been developing this, and there are some other really cool things in here that tie your ITSM processes to financials, to find out how much things are costing in your environment. It's something that we put into new environments to help us understand how instances are performing and how they're being leveraged, and to help us reverse engineer, when we come into organizations that are having issues, what they can do to fix them. OK, so, nope, not that. All right. So the last things I want to touch on real quick are when to start over and when to fix. I'm just going to run through this in a little less detail, but when we're talking when to fix versus when to replace or start over, the big things to consider are these. Bryan Schrippe 1:03:47 Minor issues, like script issues, minor usability issues, or small process changes: that is a use case to fix things. Limited budget: you don't have a lot of money and you need to do things rather quickly. Time constraints: we need this up and running and fixed in a matter of weeks or months. There are already extremely complex processes and existing data in place that you need to maintain, so it makes more financial sense to bring in the right people to help you fix what's there. And then employee expertise, right? Maybe you don't have the right people.
Maybe there was an M&A and people got lost, or knowledge left the building through layoffs or whatnot, and you don't have the necessary expertise; then it might behoove you to bring in somebody to help you fix some of those issues. And then there are some additional thoughts and considerations that I really wanted to touch on about when it's right to start over. So, fundamental flaws: your data model is messed up from the get-go, right? You have extreme data problems; the data model isn't what you had set out for it to be, but you have data existing there and it's really causing havoc. Your users aren't adopting the platform at all; there's just a really unsatisfactory user experience there. You have security issues: maybe the security roles, responsibilities, and groups weren't all set up properly, and you've got shadow IT development issues going on all over the place. We've seen that happen too. Or complexity and technical debt, right? Maybe the environment has become too complex, you don't have the resources to manage it, and you don't have the team to go back and reverse engineer everything that's been done. Maybe it just makes sense to restart from scratch and redo everything to make it what you need it to be. And then lastly, a shift in business requirements. If the business has mandated, ohh, maybe we're not gonna do ITSM, maybe we're going to move over to customer service management because there's been a fundamental shift in our business, maybe that's a good opportunity to start over from scratch and redefine how you're doing business on the ServiceNow platform. These are just some thoughts and considerations that you can take a look at within the actual guide itself; there's some additional info in the notes section. But lastly, how can we help?
No, sorry, skipping around here. There we go. All right, so how can we help? We can do four different things. We can do an envisioning session or assessment: if you're thinking about fixing a bunch of issues or starting over, we'd be happy to help take a look. We offer an assessment to go through your environment and actually examine a lot of those issues. We can help you define business cases or use cases to make sure that what you're doing, from either a process or program perspective, is congruent with what your business needs and how you are effectively managing your ServiceNow environment. We can also do roadmapping sessions and actually road map out what you're going to do in the future. If you're having extreme issues, we can help you develop a remediation plan to work through all of those issues and develop a backlog for people to work through. Or develop an operational plan: if you've gone through an M&A, or you've had layoffs, or you've had new people take over the tool, we can offer guidance on how to operate the tool, what roles and responsibilities need to be identified, and who needs to do what, or even identify what new features you're looking to take advantage of. Maybe there are some new things in ServiceNow you're trying to take a look at that can backfill or streamline some of your processes. We also do fixes or full-blown implementations, so fixing an existing deployment or deploying brand new capabilities. We can do remediations, and we do staff augmentation with backlog support. Bryan Schrippe 1:08:19 We can optimize environments, and if you're trying to do something brand spanking new, like if you're looking into Vancouver and you see the brand new AI capabilities coming out there, we're at the forefront of all of that, and if you're looking to bring some of those net-new enhancements on, we'd be happy to help.
So with that, I know that was a lot of information, and there's even more inside the presentation, which we will give out to all of you. Thank you all for your time, and I appreciate you staying a little bit over; it's much appreciated. Michael Dugan 1:08:57 I was gonna say, we're 5 minutes over, but thank you all for your time. Michael Dugan 1:08:59 Thanks for joining.