Part 02

Change vs Release

Speaker 1: Change enablement and release management are two sides of the same coin. One keeps risky alterations under control; the other moves approved work into production.
Speaker 2: In practice you need both disciplines working together so that updates land
smoothly without disrupting users.

Speaker 1: Change enablement focuses on assessing risk before any code or configuration shift happens. Small tweaks might be pre-approved, while major ones get a thorough review.
Speaker 2: The goal is to prevent nasty surprises in production. It acts as a safety net
so development teams can't accidentally take down critical services.
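
The idea of routing small, pre-approved tweaks differently from major changes can be sketched in a few lines of Python. The change types echo common ITIL categories, but the approval paths and labels below are illustrative, not a real tool's API:

```python
# Hypothetical sketch of change-enablement routing by type and risk.

def route_change(change_type: str, risk: str) -> str:
    """Return the approval path for a change request."""
    if change_type == "standard":
        return "pre-approved"          # low-risk, repeatable change
    if change_type == "emergency":
        return "emergency CAB review"  # expedited assessment
    if risk == "high":
        return "full CAB review"       # major changes get scrutiny
    return "peer review"               # normal, lower-risk changes

print(route_change("standard", "low"))   # pre-approved
print(route_change("normal", "high"))    # full CAB review
```

The point is simply that not every change needs the same gate: routine work flows through quickly, while risky work triggers the heavier review the speakers describe.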

Speaker 1: Release management picks up once a change is approved. It bundles related work into versions and schedules them for deployment.
Speaker 2: Think of it like publishing a magazine. All the articles need editing before the final issue goes to print, and everyone should know exactly when it's hitting the shelves.

Speaker 1: When change enablement and release management work in harmony, teams can ship quickly without sacrificing stability.
Speaker 2: It becomes clear who approves what and when new features will appear, which keeps stakeholders confident that the process is under control.

CMDB

Speaker 1: A configuration management database, or CMDB, is the master inventory of
systems and services.
Speaker 2: It lists configuration items and how they depend on each other so support
teams have the full picture.

Speaker 1: Start by identifying key CIs such as servers, applications and network devices.
Speaker 2: Pull in attributes from source systems and link them together to show relationships.
Speaker 1: For example, the "web01" server might depend on the "db01" database service.
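
A minimal sketch of that CI-and-dependency idea in Python, using the hypothetical web01/db01 example above (real CMDBs store far richer attributes and relationship types):

```python
# Toy CMDB: configuration items plus direct dependency links.

cmdb = {
    "web01": {"type": "server", "depends_on": ["db01"]},
    "db01":  {"type": "database service", "depends_on": []},
}

def impacted_by(ci: str) -> list[str]:
    """List CIs that directly depend on the given CI."""
    return [name for name, attrs in cmdb.items()
            if ci in attrs["depends_on"]]

print(impacted_by("db01"))  # ['web01'] — an outage on db01 hits web01
```

Even this tiny model shows why the relationships matter: given an incident on db01, support can immediately see which services are in the blast radius.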

Speaker 1: Change requests should list the configuration items they will modify.
Speaker 2: After approval, update those CI records and compare discovery results to catch any unplanned drift.

Speaker 1: Keeping the CMDB accurate is a continual task.
Speaker 2: Automate discovery and update records after every approved change, then
audit regularly to find gaps.
Speaker 1: Compare the observed state from discovery tools against the approved configuration to detect drift.
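
A drift check like the one described can be sketched as a comparison of two dictionaries: the approved configuration from the CMDB versus what discovery actually observed. The CIs and attributes below are made up:

```python
# Compare approved (CMDB) state against discovered (observed) state.

approved = {"web01": {"os": "Ubuntu 22.04", "ram_gb": 16}}
observed = {"web01": {"os": "Ubuntu 22.04", "ram_gb": 32},
            "web02": {"os": "Ubuntu 22.04", "ram_gb": 16}}

def find_drift(approved, observed):
    drift = []
    for ci, attrs in observed.items():
        if ci not in approved:
            drift.append((ci, "unregistered CI"))     # never approved
            continue
        for key, value in attrs.items():
            if approved[ci].get(key) != value:
                drift.append((ci, f"{key} changed"))  # attribute drift
    return drift

print(find_drift(approved, observed))
# [('web01', 'ram_gb changed'), ('web02', 'unregistered CI')]
```

In practice the "observed" side would come from a discovery tool's export, but the comparison logic is essentially this.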

Speaker 1: Sometimes discovery shows a system in a state the CMDB never approved.
Speaker 2: This can come from emergency fixes, admins bypassing change control, or retired equipment that nobody remembered to remove from the records.
Speaker 1: When observed and authorised states diverge, troubleshooting and audits slow down because teams can't trust the data.

Speaker 1: A well-maintained CMDB accelerates incident troubleshooting and change planning.
Speaker 2: It reduces surprises from hidden dependencies and becomes your single source of truth.

Continual Improvement

Speaker 1: Continual improvement is the heartbeat of any successful IT service organization. It's not a one-time project but an ongoing commitment to making things better every day.

Speaker 2: This systematic approach focuses on creating value for customers while learning from every incident, change, and interaction. Think of it as evolving from a reactive firefighting mode to a proactive enhancement culture.

Speaker 1: The goal is simple: deliver better service tomorrow than you did today. But achieving this requires both structure and the right mindset across your entire organization.

Speaker 1: There are two main approaches to improvement: Kaizen and formal improvement programs. Kaizen comes from Japanese manufacturing and means "change for the better."

Speaker 2: In Kaizen, everyone makes small daily improvements. A service desk agent might streamline how they document tickets, or a network admin might create a checklist for routine tasks. These small changes add up to significant improvements over time.

Speaker 1: Formal improvement programs are bigger, structured projects. Think of upgrading your monitoring system or redesigning your change approval process. These need dedicated resources and project management.

Speaker 2: [enthusiastically] The magic happens when you combine both approaches.
Kaizen keeps the improvement mindset alive day-to-day, while formal programs tackle
the bigger transformations your organization needs.

Speaker 1: The Plan-Do-Check-Act cycle is your roadmap for systematic improvement.
It's like a scientific method for making changes that actually stick.

Speaker 2: Plan means identifying what needs improvement and designing a solution.
Maybe you've noticed that password reset requests are taking too long, so you plan to
implement a self-service portal.

Speaker 1: Do is implementing your solution, but start small. Roll out that self-service
portal to one department first, not the entire organization. This lets you test and learn
without major disruption.

Speaker 2: Check means measuring the results. Are password resets faster? Are users
happy with the new process? Are there unexpected issues you need to address?

Speaker 1: Act is where you decide what to do next. If the pilot worked well, standardize it across the organization. If it didn't, learn from what went wrong and try a different approach. The cycle then begins again.

Speaker 1: Maturity models are like a GPS for your improvement journey. They help you
figure out where you are now and chart a path to where you want to be.

Speaker 2: Think of it like learning to drive. You start with basic skills like steering and
braking, then progress to parallel parking, and eventually to driving in complex traffic.
Each level builds on the previous one.

Speaker 1: In IT service management, maturity models assess how well your organization manages services. A level one organization might be reactive, fixing things as they break. A level five organization predicts and prevents problems before they impact users.

Speaker 2: The key insight is that you can't skip levels. You need solid incident management before you can do effective problem management. You need good change control before you can implement continuous deployment. It's about building capability progressively.

Speaker 1: ITIL 4 maturity assessment looks at four key dimensions. Think of them as
the four legs of a table - you need all of them to be strong for the table to be stable.

Speaker 2: Capabilities are what your organization can do. Can you resolve incidents quickly? Can you implement changes without breaking things? Can you plan capacity to meet future demand?

Speaker 1: Practices are how work gets done. Are your processes documented and followed? Do you have standard operating procedures? Are your workflows efficient and effective?

Speaker 2: Governance covers decision-making and oversight. Who has authority to approve changes? How are resources allocated? How is risk managed? Is there clear accountability?

Speaker 1: Culture is about values and behaviors. Do people share knowledge freely? Is
there a blame-free environment for learning from mistakes? Are teams collaborative or
siloed? Culture often determines whether your improvements will succeed or fail.

Speaker 1: You can't improve what you don't measure. But measuring improvement success goes beyond just technical metrics - you need to look at the whole picture.

Speaker 2: Service performance metrics are the obvious starting point. Are your incidents being resolved faster? Are there fewer outages? Is system availability improving? These operational metrics show if your processes are working better.

Speaker 1: Customer satisfaction scores tell you if your improvements actually matter
to users. Sometimes technical improvements don't translate to better user experience,
and sometimes small changes make a huge difference to customer happiness.

Speaker 2: Employee engagement levels are crucial but often overlooked. Are your team members more motivated? Do they feel empowered to make improvements? Happy employees deliver better service, so this is a leading indicator of future success.

Speaker 1: Finally, business value delivered is the ultimate measure. Are your improvements helping the organization achieve its goals? Are IT services enabling new business capabilities? This connects your technical work to real business outcomes.

Metrics, Reporting and Dashboards

Speaker 1: Dashboards are the way we turn thousands of ticket updates and log records into a single page the CIO can understand at a glance.
Speaker 2: When designed well, those tiny coloured boxes and trend lines become an
early-warning system that saves you from unpleasant surprise incidents.

Speaker 1: Metrics act like a health check for each ITIL practice. If incident backlogs spike, the numbers will show it long before end-users start shouting.
Speaker 2: They also give us proof when things are improving. A downward trend in mean-time-to-restore beats anecdotal "I think we're faster now" any day.
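
As a concrete example, mean time to restore is just the average gap between when an incident opened and when service was restored. The incident records and field names below are invented for illustration:

```python
# MTTR sketch: average open-to-restore duration across incidents.
from datetime import datetime

incidents = [
    {"opened": "2024-03-01 09:00", "restored": "2024-03-01 11:00"},
    {"opened": "2024-03-02 14:00", "restored": "2024-03-02 15:00"},
]

def mttr_hours(records):
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(r["restored"], fmt)
         - datetime.strptime(r["opened"], fmt)).total_seconds() / 3600
        for r in records
    ]
    return sum(durations) / len(durations)

print(mttr_hours(incidents))  # 1.5
```

Plot that value month over month and you have exactly the downward trend line the speakers are talking about.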

Speaker 1: When picking KPIs, start with what the customer actually cares about—availability, response time, resolution quality.
Speaker 2: Mix in leading indicators like change success rate so you can act before outages happen, but avoid the fifty-metric dashboard that no one reads.

Speaker 1: Start with a single pane of glass that pulls ticket data from ServiceNow, uptime from your monitoring stack and config context from the CMDB.
Speaker 2: Then layer on traffic-light widgets—green for on-target, amber for at-risk, red for breach—so any stakeholder can see the story in three seconds.
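
The traffic-light idea boils down to a simple threshold function. The target and at-risk values below are illustrative, not standard numbers:

```python
# Map a "lower is better" metric (e.g. hours to resolve a P1 incident)
# to a traffic-light status against two hypothetical thresholds.

def status(value: float, target: float, at_risk: float) -> str:
    if value <= target:
        return "green"   # on-target
    if value <= at_risk:
        return "amber"   # at-risk
    return "red"         # breach

print(status(3.5, target=4.0, at_risk=6.0))  # green
print(status(5.0, target=4.0, at_risk=6.0))  # amber
print(status(7.0, target=4.0, at_risk=6.0))  # red
```

The whole "story in three seconds" effect comes from applying a rule this simple consistently across every widget.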

Speaker 1: Numbers alone rarely move people. Pair every chart with a one-sentence takeaway—“We cut P1 resolution time by 30% last quarter.”
Speaker 2: That narrative makes the data memorable and, more importantly, sparks the discussions that keep the improvement loop turning.

Speaker 1: Dashboards only matter if they trigger action. Build a cadence—daily stand-ups, weekly ops reviews—where owners commit to fixes on the spot.
Speaker 2: Track those tasks in the same tool so next week’s dashboard shows whether
the needle actually moved. That’s when metrics become culture.

Problem Management and RCA

Speaker 1: Problem management digs into the why behind repeat incidents so we can
stop firefighting the same issues over and over.
Speaker 2: It's about stepping back, analysing patterns and addressing the real root cause rather than just clearing another ticket.

Speaker 1: We invoke problem management when incidents keep cropping up or the impact is too big to ignore.
Speaker 2: Think chronic outages or situations where quick fixes won't cut it—we need
a deeper look to stop the bleeding.

Speaker 1: Techniques like the 5 Whys or fishbone diagrams help track symptoms back
to the true cause.
Speaker 2: By linking related incidents, we can see patterns and gather evidence from
every team involved.
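
Linking related incidents to surface a pattern can be as simple as counting occurrences of a suspected cause. The incident data below is made up for illustration:

```python
# Count incidents per suspected cause to spot a recurring pattern.
from collections import Counter

incidents = [
    {"id": 101, "suspected_cause": "db01 connection pool"},
    {"id": 102, "suspected_cause": "db01 connection pool"},
    {"id": 103, "suspected_cause": "expired TLS cert"},
    {"id": 104, "suspected_cause": "db01 connection pool"},
]

pattern = Counter(i["suspected_cause"] for i in incidents)
print(pattern.most_common(1))  # [('db01 connection pool', 3)]
```

A cluster like this is the evidence base you then drill into with the 5 Whys or a fishbone diagram.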

Speaker 1: Once we've nailed the root cause, document the workaround and roll out changes to eliminate it.
Speaker 2: Review whether those fixes stick. Problem management is a cycle of learning and preventing future pain.

SLAs, OLAs and KPIs

Speaker 1: Service level agreements, or SLAs, spell out exactly what customers can expect from the support team. They keep everyone honest about response times and resolution goals.
Speaker 2: Think of them as the ground rules for collaboration. If an SLA gets breached,
it usually triggers extra scrutiny or penalties, so they're worth taking seriously.

Speaker 1: A solid SLA defines the services covered, the hours of support and how quickly the team needs to respond to or resolve different issue types.
Speaker 2: It also spells out the consequences if those targets aren't met. No one likes
a penalty clause, but it's there to keep priorities clear when systems break.
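
Here is a minimal sketch of checking a ticket against response and resolution targets. The priorities and target values are illustrative, not standard SLA numbers:

```python
# Hypothetical SLA targets per priority and a simple breach check.

SLA_TARGETS = {  # priority -> (response minutes, resolution hours)
    "P1": (15, 4),
    "P2": (60, 8),
    "P3": (240, 24),
}

def breached(priority: str, response_min: int, resolution_hr: float) -> bool:
    resp_target, reso_target = SLA_TARGETS[priority]
    return response_min > resp_target or resolution_hr > reso_target

print(breached("P1", response_min=10, resolution_hr=3))  # False — met
print(breached("P2", response_min=90, resolution_hr=5))  # True — slow response
```

Real service desks automate exactly this check so a looming breach escalates before the penalty clause kicks in.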

Speaker 1: Operational level agreements, or OLAs, sit behind the scenes. They define how internal teams support each other so that customer-facing SLAs can be met.
Speaker 2: Imagine the database team promising the app team a five-minute failover. Without that kind of OLA, it would be hard to guarantee the bigger service commitments.

Speaker 1: Key performance indicators turn these agreements into measurable results.
Response time, first-call resolution and uptime are common examples.
Speaker 2: Tracking KPIs month over month shows whether your processes actually work. If a target keeps slipping, it's a prompt to refine the workflow or renegotiate the SLA.
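
A "target keeps slipping" check can be sketched as a rule over recent monthly readings. The three-consecutive-misses rule and the data below are illustrative choices, not a standard:

```python
# Flag a KPI whose last few monthly readings all missed target.

def keeps_slipping(monthly_values, target, months=3):
    """Lower is better; True if the last `months` readings all miss target."""
    recent = monthly_values[-months:]
    return len(recent) == months and all(v > target for v in recent)

response_minutes = [12, 14, 18, 19, 21]  # monthly average response time
print(keeps_slipping(response_minutes, target=15))  # True — time to act
```

A rule like this turns a wall of monthly numbers into the specific prompt the speakers mention: refine the workflow or renegotiate the SLA.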
