Channel: .conf Speakers – Splunk Blogs

Banking on Success at .conf2015


Splunk .conf2015 is here! Not just banks, but other financial services organizations will be attending and speaking as well!  We have an exciting lineup of over 165 sessions for you to choose from. If you have a role within financial services, be sure to check out the following sessions:

Tuesday, September 22

  • 10:00 – MetLife: How MetLife is Using Splunk to Improve Customer Experience of Our Sales and Servicing Websites; Speakers – Mariya Gilyadova, Bob Jones
  • 1:00 – Northern Trust Bank: Leveraging Splunk for Tracking Business Transactions; Speakers – Arijit Das (Northern Trust), Joseph Noga (Komodo Cloud)
  • 4:15 – FINRA: Leveraging Splunk to Manage Your AWS Environment; Speaker – Gary Mikula
  • 5:15 – Orrstown Bank: Using Splunk Cloud and Anomaly Detection Capabilities To Fight a Billion Dollar Fraud Problem; Speakers – Andrew Linn, Christopher Thompson

Wednesday, September 23

  • 10:00 – CSAA: Splunking the User Experience: Going Beyond Application Logs; Speakers – Diviyesh Patel, Doug Errkila
  • 11:15 – Fiserv: Using Splunk for IT Service Intelligence at Fiserv; Speaker – Robert Goolsby
  • 1:15 – Moody’s: How to Use Splunk to Detect Malicious Insiders; Speakers – Derek Vadala, Moody’s; Joe Goldberg, Splunk
  • 2:15 – Financial Services Panel: Data and Financial Services: Real World Use Cases
  • 2:15 – Detecting Bank Account Takeover and Fraud Cyber Attacks with Splunk; Speaker – Gleb Esman
  • 3:15 – UniCredit: A Constant Evolution Towards Vision, Performance and Analytics; Speakers – Mirko Carrara, Stefano Guidobaldi

Thursday, September 24

  • 11:15 – Finanz Informatik: Compliance for 124 Million Bank Accounts; Speakers – Dirk Hille, Michael Grabow
  • 2:15 – PostFinance Ltd: How Splunk Connects Business and IT at Swiss Bank PostFinance Ltd; Speaker – Patrick Hofmann

Follow the conversations coming out of the conference:

#splunkconf

Thanks for reading,
Lauren

Lauren Wang
Sr. Solutions Marketing Manager
Splunk Inc.


Notes From Splunk .conf 2015 Day One


What a fantastic first day at my first ever Splunk global user conference, .conf15. Last night’s Partner soiree kicked off the fun, bringing our customers and partners together in the expo pavilion over some tasty conference food and free-flowing drinks. Demos everywhere, a gaming space, golf swing analytics, and even a race car – no wonder it was absolutely packed!

The first full day started today with the opening keynote in front of a visibly energized crowd in a packed hall. Over 4000 Splunk customers are attending this year, and .conf is still growing. Not surprising, since this year Splunk chalked up its 10,000th customer.

The keynote was fantastic, among the best I have seen. Dynamic and informative, with a bit of fun and lots of customers, but no BS, no history lessons, no FUD, no preaching, no ego-stroking, no buzzwords, no dogma. Just a warm welcome and four key announcements:

  • Splunk Enterprise 6.3 – breakthrough performance, more advanced analytics and visualizations, and high-volume event collection for DevOps and Internet of Things (IoT) devices.
  • Splunk IT Service Intelligence (ITSI) – a new IT monitoring and analytics solution to provide new levels of visibility into the health and key performance indicators (KPIs) of IT services.
  • Splunk Enterprise Security 4.0 and Splunk User Behavior Analytics (UBA) – to help organizations track attackers’ steps and use machine learning to detect cyber-attacks and insider threats.

Every announcement included demos of products working, backed with actual customer stories. The new security solutions will be available next month, and the IT solutions are all available immediately. No ‘announceware’, no waiting for delivery next year.

As a former ops guy and now a student of DevOps, the IT Service Intelligence (ITSI) launch made me sit up and listen. With automatic KPIs that handle dynamic, distributed environments, and stunning, highly customizable dashboards, IT Service Intelligence goes beyond traditional monitoring tools.

Splunk ITSI opens up a whole new space with a unique ability to collect, analyze, and correlate data from applications, network monitors, APM tools, wire data, cloud providers, mobile devices, data center infrastructure, web servers and more. Visually mapping services and KPIs over custom visuals puts context into data, uncovers new insights, and translates operational information into business impact. Already over a dozen Splunk customers are using this new solution, including Vodafone, Leidos, EnerNoc, Fiserv and Accenture – some of them already in production.

Splunk has long been a leader in operations management using big data and machine data, but this keynote also showed how Splunk is taking the lead on making sense of machine data coming from the Internet of Things (IoT). I especially enjoyed the demo showing the potential for real-time mobile analytics. The audience (in person and online) was asked to visit a web page and shake their phones, which generated a ‘shake’ data stream. This was then pulled into Splunk and analyzed in real time for shake speed, top shaker origin, and device type.

This live experiment/demo was a bit of fun, but with a serious purpose. It showed how easy it is to get data from connected devices and into Splunk, without an app, agent, or download – a powerful capability in mobility, and the emerging IoT.

On an even more practical note, it was also fascinating to learn how companies like Target are using Splunk to power their IoT insights, driving efficiency from automated industrial robots in a distribution center.

And finally, it was impressive to see how security pros can use the new Splunk UBA solution to log and trace the workflow of a breach, detecting malicious threats from insiders or outsiders (including outsiders who look like insiders, and vice versa). A fun short film walked through the process of building a ‘journal’ showing the end-to-end lifecycle of a cyber-attack – truly unique, and a lot of fun.

Of course, I visited many other sessions today as well. This may be the best part about .conf2015 – how many customers want to tell their stories, and how enthusiastic they are about Splunk and what it does for them. There were too many great stories to cover here, but if you follow my Twitter account (@AndiMann) then you would have seen some of the highlights.

Tonight is a party night, which I am sure is going to be awesome. I’ll post more from .conf2015 tomorrow and you can follow all the conversations coming out of .conf2015:

#splunkconf

Thanks!
Andi

Earn a Seat at the Table: The convergence of IoT and business analytics


When I was a Splunk customer in financial services, my team and I had a strut; we had a swagger. We were in the business of equipment finance, providing commercial leases for things like forklifts, freight trucks, and x-ray machines, but despite being in an industry that hadn’t really changed in decades, our peers saw the value of technology. When we walked into a meeting with the business, people knew we weren’t there to fix the printer; we were there to help use technology to deliver more value to our customers. We had earned a seat at their table.

If you think about it, once you’ve signed for a lease or loan, you hear from your bank for only a couple of reasons: to remind you that you need to pay your bill, or that you’ve missed a payment and hell will rain down if you don’t pay immediately. The same thing applies with your health insurance, and quite frankly, these negative conversations are how most businesses interact with their customers. Visionary banks, healthcare companies, retailers, and others have realized that an ongoing, positive, higher-value conversation with customers leads to greater customer loyalty, better cross-selling & up-selling opportunities, and ultimately higher profits.


Enter IoT. Our thesis: the lines between digital and physical businesses are blurring, and a unique combination of IoT and business analytics will transform customer conversations and help disrupt industries. Let’s talk through a few examples, but first, let me tell you a story.

Many years ago, my mentor had a heart attack while driving to work. His heart stopped. The car veered off the road and hit a telephone pole head-on. The impact from the telephone pole, followed by the impact of the airbag, miraculously restarted his heart, and he lived. Imagine being able to detect an adverse health condition – heart attack, seizure, sleepiness, etc. – and actually having the power to do something about it; to take action – stop the vehicle, turn on the hazards, call 911. With advances in wearables, connected cars, and real-time analytics, this isn’t just an idea in slideware; we actually have the ability to solve this problem today. Now.

For the Splunk “ninjas” reading this blog post, you’re already imagining the query needed to make the correlation, and the notification rules necessary to drive actions. The keyword is correlation; this is an advanced correlation problem across streams of time-series machine data. Wearables and connected cars are just two examples of where we could collect this data.

We have customers already collecting pieces of the data needed, so imagine the “art of the possible” if we correlate data across them. Driver & vehicle safety is one application, and it is a use case that could actually save lives. Right now.

From a commercial standpoint, there are really interesting business outcomes within reach. For example, businesses could lower liability insurance through driver, vehicle, and employee safety; or businesses could shift to pay-per-use billing models, versus having to lease commercial equipment for multiple years at a time. Commercial leases are priced based on projected residual values of the asset, which usually takes into account a worst-case scenario for wear-and-tear. Imagine a distribution center that can pay per “forklift hour”, calculated based on the amount of weight lifted and the distance traveled, which is a more accurate way to determine wear-and-tear. Similarly, freight companies could lease commercial trucks based on accurate wear-and-tear metrics, which would fundamentally change their cost structure.


Businesses can understand the behavior of their customers (in a non-creepy way) and optimize the customer experience. For example, banks that lease commercial equipment – trucks, x-ray machines, forklifts, etc. – are no longer just reminding their customers to pay their bills every month. These banks can work with their customers: help a freight company look at their fleet of vehicles and, based on driving patterns, change the fleet composition to lower costs; help a hospital compare how their radiology department is performing against their peers, and understand how to get better. The convergence of IoT and business analytics enables businesses to transform the conversations with their customers and pursue disruptive business models. IT teams that have earned a seat at the table with the business are in a unique position to help drive this transformation.

Thanks,
Snehal Antani
CTO, Splunk Inc.

Notes From Splunk .conf 2015 Day Two


The Search party last night was a blast, but today it was back to business. And Day 2 of the global Splunk user group, .conf2015, was another excellent day.

I started with some good mates from the industry analyst community, talking Splunk IT Service Intelligence (ITSI) over breakfast. I gained intriguing insights into our customers and our market, and came away with all sorts of possible new use cases for ITSI.

But as Steve Jobs said, innovation sometimes means saying ‘no’ to a thousand good ideas, so for now we are going to focus on fulfilling the enormous early demand from our customers for POCs. Still, we are always looking for new ideas from our customers and partners (and analysts too!), so if you have ideas for new use cases, please let me know.

After breakfast I attended the ITSI Customer Panel. What a privilege to have customers in our early adopter program explain some of the use cases they have found for ITSI, including:

  • Integrated monitoring – correlating multiple tools to monitor services, not servers
  • Proactive problem management – using ITSI’s predictive analytics to avoid problems
  • Service desk/ticket-based alerting – including rate of ticketing into problem symptoms
  • Partner monitoring – tracking and alerting on API failure rates for partner services
  • Resource planning – using predictive analytics to plan disk space, new licenses, etc.

I also spent time in other customer presentations (it was super tough to choose from the over 200 on the agenda!); a couple of ‘Splunk for DevOps’ type sessions (I’ll be looking at this topic in-depth over coming weeks and months); a fantastic preso on how to build dashboards using the ITSI ‘Glass Tables’ features; and in meetings with ITSI early adopters and other Splunk customers.

I loved learning what our customers are doing with Splunk today, and what they are looking for tomorrow, including:

  • A large European bank looking to expand from Splunk Enterprise and Splunk Security to ITSI, with their biggest challenge being how to share ownership of the Splunk platform
  • An Australia-based gaming business using ITSI as a business dashboard, showing just 6 KPIs on a big screen so everyone can see their core business status
  • A US online educational institution using ITSI to connect multiple teams with common metrics, dashboards, and language (sounds DevOps-ish!)
  • An Australian software consultancy using Splunk in their DevOps toolchain, connecting and measuring activity across Chef, Jira, Octopus Deploy, and Amazon Web Services
  • A European telco using ITSI to manage its identity and SSO environment, to improve user experience and rapidly find and fix login issues

Of course there were some more new announcements today too, some very cool new enhancements to our Mobile, Cloud, and Big Data products:

  • Splunk Analytics for Hadoop with Hunk 6.3 – to drive down TCO by using commodity storage in Hadoop, and search, correlate, and analyze both real-time and historical data using the same Splunk UI.
  • Splunk Light in the Cloud – our lightweight solution is now available as a cloud service, bringing the power of Splunk to small IT environments while eliminating time and expense
  • Splunk MINT in the Cloud – in addition to running on top of Splunk Enterprise, Splunk MINT now runs on Splunk Cloud, for enhanced Operational Intelligence with mobile data for developers, operations, and product management.

There is no party tonight, but it is Vegas, so I am sure I can find something to do! 😉 If you have any suggestions you can always reach out to me on Twitter. Make sure to stay tuned tomorrow for more from .conf 15. Although it is only a half day there are still a lot of great customer sessions going on.

And while you’re at it, block out your calendar for this time next year. All the tweets and blogs in the world cannot do this conference justice. You really should be here.

Thanks!
Andi

#splunkconf

Cheers to .conf2015 with Three Clicks and a Beer


Tuesday was the kickoff of .conf2015: The 6th Annual Splunk Worldwide Users’ Conference in Las Vegas and it was incredible.  After months of preparation, we were ready to hit the stage for the keynote and show the audience – our customers – how much we appreciate their loyalty, their innovation, and their inspiration.  The room was packed.  The staging was absolutely impressive. The place was buzzing.  I was, and still am, in awe of the amount of work, preparation, and production needed to pull off an event of this scale. It’s just one more example of why I am so thrilled to be part of this team.

I was the third speaker in an impressive lineup of Splunkers – including Godfrey Sullivan, Splunk Chairman and CEO. I wish I had a photographic memory and could remember every word spoken by my inspirational peers, but I don’t.  What I do know is that there was one phrase Godfrey used that seemed to capture what we do at Splunk so vividly that I had to share it. He was describing machine data and how Splunk does things differently.  With Splunk, customers can:

 

“Catch it in flight and ask it a question.” 

 – Godfrey Sullivan, CEO, describing how Splunk customers can use machine data

 

Godfrey’s opening remarks were still fresh on my mind as I took the stage to talk about Splunk Cloud – the software-as-a-service deployment of our Splunk Enterprise product. While there are hundreds of features, and customer use cases, and benefits that I could have shared, I had only a few minutes to engage with the audience in the hopes that they would walk away with something memorable about Splunk Cloud. So, we came up with a theme that we thought would resonate with most people in the audience.

Who doesn’t like to celebrate – especially small victories? And how do people often celebrate these victories? Well, being that I’m Canadian and we were in Las Vegas, we thought we could weave in the idea that a frosty cold beverage would be the perfect way to engage.  So, completely energized by the presenters before me, I came on stage enjoying a Molson Canadian ready to share our key messages around time to value and ease of use.

  • How easy is it to deploy Splunk Cloud?  “Three Clicks and a Beer” easy!
  • How simple is it to purchase Splunk Cloud?  “Three Clicks and a Beer” easy!
  • How fast is it to start a hybrid search using Splunk Cloud?  “Three Clicks and a Beer” easy!

It was fun.  In just a few minutes, I did my best to capture the audience’s attention and leave them with a few key takeaways about Splunk Cloud – accelerated time to value and ease of use. And, maybe if I was lucky, the audience also learned more about how Orrstown Bank and AAA are using Splunk Cloud to support their business use cases, or how we’re now global thanks to our partnership with AWS, or a bit more about the security of Splunk Cloud. If you want to watch the .conf2015 Keynote, maybe you’ll feel like celebrating too.

I’m honored, and humbled actually, that over 4,000 customers and partners attended .conf2015 to learn more about using our solutions to gain operational intelligence from their machine data.  Thank you all for being part of this amazing journey – whether you were at the event, reading about the event, or just interested in learning more about Splunk solutions.

Cheers to you!

Marc

 

Marc Olesen

SVP & GM, Cloud Solutions

Splunk Inc.

Getting Smarter with Splunk: Lessons Learned in Higher Education


Splunk has a lot of smart people working to bring you the best product experience and return on investment that we can. I am always humbled, however, when customers come back to Splunk with ideas that are brilliant, creative, and valuable… and something that we as a company would probably have never thought of ourselves. Splunk a train? We got that. Splunk a plane? We got that. Splunk an automobile? We got that too.

Which is why the potential of working with the best universities on the planet is so exciting – once these folks understand and explore the power of Splunk, the ongoing transformation of the research and teaching institutions will accelerate in ways we can only guess at now.

Just last month at our annual conference, .conf2015, we had multiple presenters from universities across the globe talking about how they use Splunk to improve their security profiles and enhance IT operations. My favorite one, though, might be the one titled “Splunking IT Data Is Great, Splunking Non-IT Data Is Awesome” by Mat Benwell from the University of Adelaide in Australia. Want to unleash your data? Connect your IT data with data from other departments or silos in your organization.

Closer to home, you might have gotten some additional insights from Allen Tucker of Indiana University. He was at the Internet2 Tech Exchange two weeks ago talking about how his department uses Splunk to help manage IT policies across multiple physical campuses.

I’m looking forward to Educause and learning more from all of you about the value you are getting from Splunk – you truly drive the innovation that makes this an exciting place to work!

Thanks,

Jennifer Roth
Director of Higher Education
Splunk Inc.

Bringing “Sexy Back” to IT Ops. An EMEA view on .conf2015


As I write this, I’m on a train into London, back in a cold, foggy UK following September’s .conf2015 in Las Vegas. It was a bumper week: around 4,000 people in the MGM Grand, hundreds of fantastic customer stories, new product announcements, a huge partner pavilion, and some great Splunk stories shared over a drink (or two…). This year’s event generated some great buzz, with #SplunkConf trending on Twitter during the keynote. From an EMEA perspective, we had three customer testimonials in the opening hour from BMW (using Splunk for IoT), Otto Group (using Splunk for business analytics) and Vodafone (using Splunk’s new IT Service Intelligence product). We also had customer speaking sessions from Swisscom, PostFinance, DATEV, Vertu, Finanz Informatik, Yoox, UniCredit, Bosch, Gatwick Airport and Koncar (with Infigo).

 

I wanted to write up a summary of the conference from an EMEA and IT Operations perspective. There’s a great blog post series from Splunk’s very own Andi Mann on day 1 and day 2 of .conf. My colleague Matthias will be writing a blog post from the security angle, including the launch of our new User Behavior Analytics product (from the recent Caspida acquisition). The big news from an IT Ops perspective was the launch of Splunk ITSI (IT Service Intelligence).

The goal of ITSI is to take a machine data-driven approach to monitoring and deliver an ITOA solution that abstracts away an increasingly hybrid and complex technology landscape to provide business context.

 

The reason for the title of the blog relates to our ITOA lead, Jonathan Cervelli (“JC”) arriving on stage to the Justin Timberlake (“JT”) song and promising to bring “Sexy Back” to monitoring and ITOA. Jump to 1:14:40 to hear JC talk about the launch (or feel free to watch the whole keynote!)

 

We were very lucky to have Vodafone as a launch customer for ITSI (you can now download their case study of how they use Splunk ITSI) and you can see their interview with Silicon Angle’s TheCube here:

 

I put together a Storify page to sum up some of the social media from the EMEA customer sessions and I wanted to just pick out a few EMEA IT Ops customers:

Vertu, the luxury phone manufacturer, spoke about how they use Splunk to ensure the quality of their software releases. They also discussed how they get alerted if one of their customers has a software issue or crash on their phone. You can see Rob Charlton of Vertu talk about their use case here:

There is an in-depth article in Tech Week Europe that describes Vertu’s use case, and their presentation from .conf can be found online.

 

UniCredit, one of Europe’s largest banks, uses Splunk for ITOA across their banking operations. UniCredit manage multiple terabytes of machine data in Splunk from over a hundred different sources, including their ATMs, mobile and internet banking services. They showed some great data visualizations of real-time banking analytics and how the same data can be used for multiple purposes:


UniCredit’s case study is available online and so is their presentation from .conf:

 

DATEV is the last customer I wanted to mention; they are using Splunk to get Operational Intelligence about their online tax and legal services. Splunk supports DATEV’s ITIL and ITSM programs and they are getting insight and ITOA from a wide range of data (over 400GB a day) from sources including middleware, VMware, CICS, mainframe (z/OS), firewalls, SAN, DB2, MS SQL, Windows, routers, switches etc. The business outcome is that they now have real-time monitoring of their online and web services. This allows them to improve the RASP (Reliability, Availability, Scalability, Performance) qualities of key services, proactively spot incidents and visualize their customers’ journeys. Below, you can see an incident being detected in Splunk:


If an incident does occur, then by having a common view of IT in Splunk they can reduce mean time to repair (MTTR) & mean time to investigate (MTTI). DATEV’s presentation is now available if you want to read more.

 

Stay tuned as Matthias will be blogging about the organisations from EMEA who are using Splunk for security in the next couple of days.

As this year’s .conf t-shirt says, “I’m a Splunker, AMA” – so if you have any questions, please pop them in the comments box below.


As always, thanks for reading.

Using Splunk – It’s a Revolution!



I’m still coming down from the high that I experienced at .conf2015 a few weeks ago in Las Vegas. It was an outstanding event—from the great customer presentations, to the new product updates and the Search Party (the silent disco was a highlight!). That said, not much can compete with the honor I had in presenting this year’s Splunk Revolution Award Winners.

If you’re not familiar with the Splunk Revolution Awards, the awards were established to distinguish the “best of the best” among our customers and hopefully inspire others in the process. These are folks who share their stories and I’m blown away by what they’ve been able to accomplish with the Splunk Platform.

There was so much goodness that I could easily blog on each of the winners, but in the spirit of time management, here are a few highlights:

  • Simon Balz and Mika Borner (LC Systems—Switzerland) – collaborated to develop the Alert Manager app, which works as an extension on top of Splunk’s built-in alerting mechanism. Incidentally, they also won the Splunk Apptitude contest with the Hyperthreat App Suite, which uses risk scoring to identify insider threats.
  • Khalid Ali (Symantec) – built a SIEM with Splunk software, then worked across Symantec’s IT Ops, Apps and Analytics teams to prove the value of the Splunk Platform as an enterprise solution.
  • Frank D’Arrigo (AAA Western and Central New York) – took first place in the Innovation category of Splunk’s Apptitude contest with his PRI Capacity app. Thanks to Frank, AAA uses Splunk Cloud to improve customer service by monitoring the company’s call management system. Frank developed a self-monitoring, self-diagnostic, and self-healing Splunk solution that helps AAA mitigate the risk of telephone busy signals across the Western and Central New York territory.

Revolution Award winners at the Partner Soiree

You can check out the full list of winners below, or click here to learn more about the awards. If you’re interested in submitting for next year’s awards, pre-register for .conf2016, which will be held September 26-29, 2016, at the Walt Disney World Swan and Dolphin Resort in Orlando, Florida.

Hope to see you in Orlando!

Thanks,

Doug

Full List of Winners

 

Developers:

Simon Balz, LC Systems

Mika Borner, LC Systems

Ashok Sharma, QOS Technology

 

Enterprise:

Khalid Ali, Symantec

Steven Selk, Sony Network Entertainment International

Rick Sigle, Jump Operations

Jordan Weinstein, Stroock & Stroock & Lavan LLP

 

Innovation:

Dennis Berman, Capital One Services, LLC

Frank D’Arrigo, AAA Western and Central New York

Amanda Peck, The Walt Disney Company

Andrew Wurster, Atlassian Software Systems

 

Social Impact:

Tyler Menezes, StudentRND

 

Splunk Ninjas:

Joe Cramasta, Comcast

Charlie Huggard, Cerner Corporation

Maria McClelland, Oak Ridge National Labs


Drop your breaches: EMEA security sessions at .conf2015


Hi all,

Recently we had our annual user conference .conf2015 at the MGM in Las Vegas. We had many European customers join us there and some of them presented the impressive things they are doing with Splunk and their machine data. Earlier this week, Matt talked about the EMEA customers that presented their IT Operations use cases. I want to share with you how EMEA customers use Splunk for Security. Everything from traditional SIEM use cases, to security analytics with automated response, as well as protecting the business by using Splunk for fraud and forensics. Here are the highlights of this year from EMEA – you can review the slide decks and watch the recordings on our .conf2015 website.

Yoox.com: Building an Enterprise-Grade Security Intelligence Platform


Gianluca Gaias, Head of Information Security at Yoox Group, the global leader in online luxury brands (which recently acquired Richemont’s Net-a-Porter), adopted Splunk as the integration fabric of their cybersecurity platform. Specifically, Splunk provides real-time event correlation and analytics to allow intrusion detection and identification of recurring malicious behavioural patterns. Any violations of security policies are detected by an automatic alerting system. These incidents are visible in a comprehensive set of dashboards that enriches activity monitoring with deep investigation capabilities. Yoox is currently working to build an enterprise-grade security intelligence platform with predictive and learning capabilities based on their current Splunk deployment; with this achievement they will take a step forward from a reactive approach to a more mature, proactive one.

Recording | Slides

Swisscom: Collaborative Security Model


Christof Jungo, Head of Security Architecture from Swisscom presented the new way they want to approach IT security in the near future. Recently they also published the report “Cyber Security: the current threat status and its development”.

The collaborative security model is a framework that extends Splunk’s existing monitoring solution with an open and expandable abstraction layer for security commands. The aim is to build a true ecosystem, which allows all security solution providers to participate by expanding the framework with their own application. The framework establishes a standardized two-way communication channel. This enables security components to be managed centrally. Another advantage is the abstraction layer. This ensures security providers can easily be replaced at any time with a new, more suitable product. In our joint efforts for phase 1, we brought a number of providers onboard, such as Intel, Fortinet, Palo Alto Networks and EMC. The goal is to build a prototype to further enable manufacturers to participate in the ecosystem.

Recording | Slides

Christof also gave an interview at theCUBE and explained their concept of “we are already breached”.

PostFinance: How Splunk Connects Business and IT at a Swiss Bank


Patrick Hofmann, Head of IT Infrastructure showed how PostFinance, Switzerland’s third-largest retail bank, grew from using Splunk for log management to providing machine data-based services to a wide audience, including business applications. The session provided a short overview of the Splunk environment at PostFinance and then focused on two use cases:

  1. Business Support: The application support team has moved from using database exports and Excel to create their monthly reports, to being able to recognize possible fraud cases and create any report a manager asks for on the fly.
  2. Fraud Detection: The online security team uses Splunk to monitor the biggest online banking portal in Switzerland and to react in real time against threats or possible attacks.

To end the session, he took a quick look at the key success factors for implementing Splunk at PostFinance.

Recording | Slides

Finanz Informatik: Compliance for 124 Million Bank Accounts


Dirk Hille, Michael Grabow and Julian Teichart from Finanz Informatik (FI), the IT service provider for approximately 416 German savings banks with up to 124 million bank accounts, explained their journey with Splunk. Finanz Informatik uses Splunk to comply with both internal requirements and external regulations to control access to customer data. They showed how Finanz Informatik started with Splunk to build a centralized SIEM platform across their mainframe, network, Unix and Windows environments. They then gave an overview of how Finanz Informatik uses Splunk for compliance requirements. This session covered how Finanz Informatik designed the architecture, the challenges they faced and the solutions they implemented. They also presented their monitoring, automated deployment and release management for Splunk in a complex, heterogeneous IT environment.

Recording | Slides

Linux Polska: From Zero to Pretty Robust Fraud Detection Tool


Tomasz Dziedzic, Senior Service Architect at Linux Polska, showed how one of their customers (a large bank) started reporting cases of wire transfers not being delivered. As a result, clients began threatening lawsuits and the bank started to lose its reputation. The anti-fraud team was helpless. The security analysts found suspicious event sequences in custom application and web server logs, which indicated that someone had stolen clients’ passwords. An attempt to solve the problem of automated fraud detection with old-school Unix tools such as egrep, sed, awk and cron led to a quick-and-dirty, temporary, partial solution that nobody was fully satisfied with. The anti-fraud team still needed a solid and flexible tool to support fraud detection. Tomasz presented the main features of the fraud detection tool Linux Polska built and demonstrated how they used Splunk to build it quickly.

Slides

 

If you want to find out more, see how you can protect yourself from modern security attacks and “drop your breaches” then visit our SplunkLives or join us next year for .conf2016 in Orlando, FL.

Looking forward to seeing you soon!

Best regards,

Matthias

Planes, Trains, Automobiles (and Shopping). European Business Analytics at .conf2015



 

So far in this blog series wrapping up .conf2015 from an EMEA perspective, we’ve explained how to bring sexy back to IT Ops whilst dropping your security breaches.

 

We wanted to wrap up with some of those exciting analytics use cases outside of IT Ops and Security. EMEA had some great customers talking about their use of Splunk for business analytics and we had case studies of planes, trains and automobiles (and very large omni-channel retailers).

 

 

 

As we’re increasingly seeing here at Splunk, one of the secrets to getting value from your data is to collect it once and use it for multiple purposes. Analytics plays a key part in enabling everyone inside a company to get value, even if they have different needs or questions to ask the data. The EMEA organisations at .conf this year certainly showed the art of what is possible in the areas of customer experience and IoT analytics.

The .conf opening keynote started by talking about why real-time analytics is so important for getting the insight to run your company – “why would you make business decisions on last year’s data?” was the opening question. Splunk’s CTO Snehal Antani then went on to explain Splunk’s strength in business analytics.


We then got to the first of two EMEA customers using Splunk for business analytics, Otto Group from Germany. Otto are one of the top multi-channel B2C retailers in Germany, second only to Amazon. They started using Splunk for IT Operations and monitoring transactions, but this has now evolved into customer experience and business process analytics. Otto are now getting real-time insight into order volume and value, and completed and failed purchases. These real-time analytics in Splunk are built from over 60 different sources of machine data. You can see their story at 44:50 in the keynote video recording below.

Their presentation can be found here. The keynote then moved on to IoT and Splunk’s role in managing and getting the value from industrial, sensor and device data.


We were very lucky to have Robotron talk about their use of Splunk in the automotive manufacturing/production process and the role that plays in the Industry 4.0 initiative driven out of Germany. They gave a great presentation on how to use Splunk to drive efficiency in the manufacturing process by making the most of sensor data. This follows on from another example of monitoring, diagnostics and preventative maintenance: the work we did recently with Deutsche Bahn and their IoT Hackathon.

Gatwick Airport is the last customer I wanted to mention; they are using Splunk for cloud-based, predictive airport analytics from machine data. Gatwick spoke about how they have 925 flights a day at peak times and how passenger experience is key. They discussed how they are monitoring travel disruption, passenger flow, social media, airport gate data, boarding card scans and X-ray data to ensure the business is performing. They talked about how they have reduced queueing times, improved on-time efficiency for aircraft, and built real-time airfield dashboards. A video that Gatwick created showing how the airport functions gives a great insight into what Operational Intelligence means to them.

There is a great article about how they use Splunk over at V3.co.uk. My favourite quote, and a great example of what we mean when we talk about Operational Intelligence, is:

This is why Gatwick airport is aiming at using Splunk’s operational analytics cloud service to predict how numerous events, incidents and factors will affect its ability to work at peak performance.

Speaking at Splunk’s .conf annual conference held in Las Vegas, Joe Hardstaff, business systems architect at Gatwick airport, explained the organisation is building out how it uses Splunk to predict the performance of its operations four hours in advance by linking multiple data sources together.

“We’re starting to move more into the predictive side of things,” he said.
“If there is disruption, we can try to man up the airport so we can get people through the airport as quickly as possible and still get them on their flights.
“So when we’ve got times of crisis or major incidents, we can predict how we are going to be operating in four hours’ time and whether we are actually able to, through the action that we are taking, reduce that timeframe to stay operational.”

Their presentation from .conf is now available if you want to find out more.

On behalf of Matthias and myself, thanks for reading the EMEA roundup of .conf2015. Hopefully we’ll see you there next year (in Orlando) or at a local SplunkLive somewhere in EMEA in 2016.

Data Integrity is back, baby!


I’m sitting in my living room near Boulder, watching the Republican Presidential Debate happening right down the road at the University of Colorado. Each candidate is doing their best to portray themselves as someone with integrity who’s ready to lead our country into the future. But this far into the debate, the responses are getting pretty repetitive…

So it’s a perfect time to check out something with some real integrity – the new Data Integrity feature added to Splunk 6.3, now generally available. It allows you to prove that your indexed data has not been tampered with after indexing. Some historical background: we used to have two similar features, one called Block Signing and the other called Event Hashing. However, the former didn’t work with distributed search, and the latter didn’t work with index replication, so in practice they were impractical to implement because most Splunk installations are configured with distributed search, index replication, or both.

The new Data Integrity feature works with both distributed search and clustered configurations. It’s particularly important if you need to prove that your ingested Splunk data has not been tampered with after indexing – think of compliance regulations like PCI DSS 10.5.5. You turn it on at the individual index level, and in this release it can only be enabled via the CLI by editing the indexes.conf file. Also note that if you enable it on an index that already has data in it, the existing data will fail the integrity check, because the hashes used for the check are calculated at index time. So it’s probably best to do this on a new index whose integrity you need to guarantee.

Here’s a quick walkthrough. I’ve created a simple index called pci_data in my local copy of Splunk:


Then, I go to my indexes.conf, and add the directive “enableDataIntegrityControl = true” to the indexes.conf file where the index is defined:
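
For reference, here is a minimal sketch of what that stanza could look like once the setting is added. The index name and the enableDataIntegrityControl attribute come straight from the walkthrough; the file location shown is just the usual $SPLUNK_HOME/etc/system/local/ convention, and yours may live in an app’s local directory instead:

    # indexes.conf (for example in $SPLUNK_HOME/etc/system/local/)
    # The pci_data index was already created above; we are only adding the integrity setting here.
    [pci_data]
    enableDataIntegrityControl = true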


Then I add some data to the index. If you look at the hot bucket where the data gets indexed, you will see an “l1Hashes” temp file get created, and as new data gets indexed into the hot bucket, it is updated with SHA256 hashes calculated on slices of the data (128KB in size, which is configurable):


Once the hot bucket rolls to warm, the .tmp file gets finalized, and an L2Hash file gets created which contains a hash of the l1Hashes file (warm buckets are read-only, so their contents should not change):


To check and see if your index has integrity, you can run the check-integrity command, which compares the hash data in the l1Hashes file with the L2Hash file, and then with the hashes of the rawdata slices in the index, and lets you know about any discrepancies:
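
As a rough sketch, the invocation looks something like this, run from the Splunk bin directory (double-check the exact flags against the CLI help for your version):

    # Verify the pci_data index from the walkthrough above.
    cd $SPLUNK_HOME/bin
    ./splunk check-integrity -index pci_data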


Obviously, indexes with a lot of data take a while to verify, but the verification process happens outside of splunkd so as not to affect indexing performance. You can back up the hash files somewhere else to prevent them from being tampered with, and bring them back in for the verification process (this would need to be scripted). Also, the slice size that the hashes are computed against can be configured.

To prove to an auditor that you can integrity check your data, show the places in indexes.conf where you have configured the feature, and demonstrate that you can run integrity checks as needed. You could even script regular integrity checks and alert if they indicate tampering.
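
As a purely illustrative sketch of that scripting idea, a cron entry along these lines would run the check nightly and mail the output somewhere for review (the install path, schedule, and recipient are all hypothetical, and how you detect a failure in the output depends on your version’s messages):

    # Hypothetical crontab entry: run a nightly integrity check at 02:00 and mail the output for review.
    0 2 * * * /opt/splunk/bin/splunk check-integrity -index pci_data 2>&1 | mail -s "Splunk integrity check: pci_data" secops@example.com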

For more info, check out our official documentation here. And for a whole lot more detail, have a look at the slides and the recording from Dhurva Bhagi’s presentation on this new feature at .conf2015. Dhurva’s presentation contains details on how this works in clustered environments, what kind of performance hit you might take, and how much disk space you need (sneak preview: both are negligible).

Stay untampered, my friends.

 

.conf2015 Highlight Series: Gatwick Airport Looks up to the Cloud


At Splunk .conf2015, Joe Hardstaff, Business Systems Architect at Gatwick Airport, spoke about the challenges his organization faced as an airport trying to compete with other local airports that have more runways. To give us background on the size of Gatwick Airport, he shared the following stats (you can share them too):

  • Gatwick is the busiest single-runway airport in the world hosting 925 flights per day
  • By 2016, the airport will have serviced 40 million passengers
  • 52 airlines flying to 200 locations in 90 countries (more destinations than any other UK airport)

Hardstaff explained that to set themselves apart, his colleagues developed an on-time efficiency solution for Gatwick to allow for an increased number of slots/flights per hour.

However, the problem Gatwick still faced was monitoring its IT architecture and processes, specifically:


  • Radar – Zoned, Finals, Landed
  • Flight Information Displays
  • Resource on Stand
  • Stand Entry Guidance System
  • Fixed Electrical Ground Power
  • Steps & Air-bridge Attached
  • Service Vehicles Geo Tag & Fence
  • Baggage Reconciliation System
  • People Counting System
  • Electronic Flight Progress Strips
  • Airport Operational Database – Flight Status

Gatwick implemented Splunk Cloud in July 2014. In doing so, Hardstaff’s team realized that combining ops data in Splunk Cloud gave them the agility and scalability they needed while providing insight into airport performance.


    You can see how Gatwick used Splunk Cloud to increase efficiencies and dependability in the IT architecture and processes in the recording and slide presentation below.


    For the full recording, check out:
    Driving Efficiency With Splunk Cloud at the Busiest Single Runway Airport in the World.

    All presentations from Splunk.conf2015.

    Related Reads:
    TechWeekEurope UK: How Splunk Is Helping Gatwick Airport Keep Up The Heathrow Rivalry
    V3.co.uk: Gatwick’s IT future will take off with cloud-powered predictive analytics

    Splunk Archive Bucket Reader and Hive


    This year was my first .conf, and it was an amazingly fun experience! During the keynote, we announced a number of new Hunk features, one of which was the Splunk Archive Bucket Reader. This tool allows you to read Splunk raw data journal files using any Hadoop application that allows the user to configure which InputFormat implementation is used. In particular, if you are using Hunk archiving to copy your indexes onto HDFS, you can now query and analyze the archived data from those indexes using whatever your organization’s favorite Hadoop applications are (e.g. Hive, Pig, Spark). This will hopefully be the first of a series of posts showing in detail how to integrate with these systems. This post is going to cover some general information about using Archive Bucket Reader, and then will discuss how to use it with Hive.

    Getting to Know the Splunk Archive Bucket Reader

    The Archive Bucket Reader is packaged as a Splunk app, and is available for free here.

    It provides implementations of Hadoop classes that read Splunk raw data journal files, and make the data available to Hadoop jobs. In particular, it implements an InputFormat and a RecordReader. These will make available any index-time fields contained in a journal file. This usually includes, at a minimum, the original raw text of the event, the host, source, and sourcetype fields, the event timestamp, and the time the event was indexed. It cannot make available search-time fields, as these are not kept in the journal file. More details are available in the online documentation.

    Now let’s get started. If you haven’t already, install the app from the link above. If your Hunk user does not have adequate permissions, you may need the assistance of a Hunk administrator for that step.

    Log onto Hunk, and look at your home screen. You should see a “Bucket Reader” icon on the left side of the screen. Click on this. You should see a page of documentation, like this:


    Take some time and look around this page. There is lots of good information, including how to configure Archive Bucket Reader to get the fields you want.

    Click on the Downloads tab at the top of the page. You should see the following:


    There are two links for downloading the jar file you will need. If you are using a Hadoop version of 2.0 or greater (including any version of Yarn), click the second link. Otherwise, click the first link. Either way, your browser will begin downloading the corresponding jar to your computer.

    Using Hive with Splunk Archive Bucket Reader

    We’ll assume that you already have a working Hive installation. If not, you can find more information about installing and configuring Hive here.

    We need to take the jar we downloaded in the last section, and make it available to Hive. It needs to be available both to the local client, and on the Hadoop cluster where our commands will be executed. The easiest way to do this is to use the “auxpath” argument when starting Hive, with the path to the jar file. For example:

    hive --auxpath /home/hive/splunk-bucket-reader-2.0beta.jar

    If you forget this step, you may get class-not-found errors in the following steps. Now let’s create a Hive table backed by a journal.gz file. Enter the following into your Hive command-line:

    CREATE EXTERNAL TABLE splunk_event_table (
        Time DATE,
        Host STRING,
        Source STRING,
        date_wday STRING,
        date_mday INT
    )
    ROW FORMAT SERDE 'com.splunk.journal.hive.JournalSerDe'
    WITH SERDEPROPERTIES (
        "com.splunk.journal.hadoop.value_format" = 
            "_time,host,source,date_wday,date_mday"
    )
    STORED AS INPUTFORMAT 'com.splunk.journal.hadoop.mapred.JournalInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
    LOCATION '/user/hive/user_data';

    If this was successful, you should see something like this:

    OK
    Time taken: 0.595 seconds

    Let’s look at a few features of this “create table” statement.

    • First of all, note the EXTERNAL keyword in the first line, and the LOCATION keyword in the last line. EXTERNAL tells Hive that we want to leave any data files listed in the LOCATION clause in place, and read them when necessary to complete queries. This assumes that /user/hive/user_data contains only journal files. If you want Hive to maintain its own copy of the data, drop the EXTERNAL keyword, and drop the LOCATION clause at the end. Once the table has been created, use a LOAD DATA statement (a sketch appears after this list).
    • The line
      STORED AS INPUTFORMAT 'com.splunk.journal.hadoop.mapred.JournalInputFormat'

      tells Hive that we want to use the JournalInputFormat class to read the data files. This class is located in the jar file that we told Hive about when we started the command-line. Note the use of “mapred” instead of “mapreduce”—Hive requires “old-style” Hadoop InputFormat classes, instead of new-style. Both are available in the jar.

    • These lines:
      ROW FORMAT SERDE 'com.splunk.journal.hive.JournalSerDe'
      WITH SERDEPROPERTIES (
          "com.splunk.journal.hadoop.value_format" = 
              "_time,host,source,date_wday,date_mday"
      )

      tell Hive which fields we want to pull from the journal files to use in the table. See the app documentation for more detail about which fields are available. Note that we are invoking another class from the Archive Bucket Reader jar, JournalSerDe. “SerDe” stands for serializer-deserializer.

    • This section:
      (Time DATE,
      Host STRING,
      Source STRING,
      date_wday STRING,
      date_mday INT)

      tells Hive how we want the columns to be presented to the user. Note that there are the same number of columns here as in the SERDEPROPERTIES clause. This section could be left out altogether, in which case each field would be treated as a string, and would have the name it has in the journal file, e.g. _time as a string instead of Time as a date.
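
    If you did drop the EXTERNAL keyword and the LOCATION clause as described in the first bullet, a LOAD DATA statement along these lines would pull a journal file into the Hive-managed table. This is just a sketch – the HDFS staging path here is hypothetical:

    -- Move a journal file from a hypothetical staging location into the Hive-managed table.
    LOAD DATA INPATH '/user/hive/staging/journal.gz' INTO TABLE splunk_event_table;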

    Now that you have a Hive table backed by a Splunk journal file, let’s practice using it. Try the following queries:

    select * from splunk_event_table limit 10;
    select host, count(*) from splunk_event_table group by host;
    select min(time) from splunk_event_table;

    Hopefully that’s enough to get you started. Happy analyzing!

    Using Splunk Archive Bucket Reader with Pig


    This is part II in a series of posts about how to use the Splunk Archive Bucket Reader. For information about installing the app and using it to obtain jar files, please see the first post in this series.

    In this post I want to show how to use Pig to read archived Splunk data. Unlike Hive, Pig cannot be directly configured to use InputFormat classes. However, Pig provides a Java interface—LoadFunc—that makes it reasonably easy to use an arbitrary InputFormat with just a small amount of Java code. A LoadFunc is provided with Splunk Archive Bucket Reader: com.splunk.journal.pig.JournalLoadFunc. If you would prefer to write your own, you can find more information here.

    Whereas Hive closely resembles a relational database, Pig is more like a high-level imperative language for creating Hadoop jobs. You tell Pig how to make data “relations” from data, and from other relations.

    In the following, we’ll assume you already have Pig installed and configured to point to your Hadoop cluster, and that you know how to start an interactive session. If not, you can find more information here.

    Here is an example Pig session. The language used is called Pig Latin.

    REGISTER splunk-bucket-reader-1.1.h2.jar;
    A = LOAD 'journal.gz' USING com.splunk.journal.pig.JournalLoadFunc('host', 'source', '_time') AS (host:chararray, source:chararray, time:long);
    B = GROUP A BY host;
    C = FOREACH B GENERATE group, COUNT(A);
    dump C;

    Let’s look at these statements in more detail.

    • First:
      REGISTER splunk-bucket-reader-1.1.h2.jar;

      This statement tells Pig where to find the jar file containing the Splunk-specific classes.

    • Next:
      A = LOAD 'journal.gz' USING com.splunk.journal.pig.JournalLoadFunc('host', 'source', '_time') AS (host:chararray, source:chararray, time:long);

      This statement creates a relation called “A” that contains data loaded from the file ‘journal.gz’ in the user’s HDFS home directory. The expression “(‘host’, ‘source’, ‘_time’)” determines which fields will be loaded from the file. The expression “AS (host:chararray, source:chararray, time:long)” determines what they will be named in this session, and what data types they should be assigned.

    • Next:
      B = GROUP A BY host;
      C = FOREACH B GENERATE group, COUNT(A);

      These statements say that we want to group events (or in Pig-speak, tuples) together based on the “host” field, and then count how many tuples each host has.

    • Finally:
      dump C;

      This tells Pig that we want the results printed to the screen.

    I ran these commands on a journal file containing data from the “Buttercup Games” tutorial, which you can download from here. They produced these results:

    (host::www1,24221)
    (host::www2,22595)
    (host::www3,22975)
    (host::mailsv,9829)
    (host::vendor_sales,30244)

    Voilà! Now you can use Pig with archived Splunk data.

    .conf2015 Highlight Series: On track for savings and performance… Aurizon rolls out Splunk Cloud


    During .conf2015 we were pleased to play host to a session about one company’s transition to Splunk Cloud. Read on to learn more, but check the session recording for more details — and be sure to grab a copy of the presentation itself for reference.

    Moving more than 250 million tons of commodities, Aurizon is one of the largest rail freight operators in Australia. Şebnem Kürklü, an information security manager, joined the company with a focus on improving IT security and vendor and service provider relationships, increasing risk awareness in business units, and leveraging investment in current technologies. A full plate for anyone.

    The Aurizon IT landscape
    Aurizon outsources much of its IT to Fujitsu, though it maintains functions such as architecture and design, security, governance, and project delivery internally. That said, soon after joining the company Şebnem discovered that she had little visibility into the network and the overall environment.

    Fortunately, Aurizon had a pre-existing on-prem Splunk deployment with an enterprise security app already monitoring malware events, performance of some directory servers, privileged access changes, code of conduct breaches, and internal application errors. However, it was only licensed for 20GB of data, there was no internal support team assigned to make the most of it, and there were no internal compute resources in place.

    After evaluating Splunk and confirming it was the right tool for the job, Şebnem and her team determined that a 100GB license was ideal. They evaluated both physical and virtual deployments and concluded that it would take a great deal of time, effort and resources to move to either of them.

    That’s when they realized they could run Splunk in the cloud. They could enjoy the functionality and performance of an on-prem solution without the management and maintenance costs.

    Making the case

    Şebnem made a case to her management team for budget by citing:

• Reduced monthly operating costs (while improving performance)
• 100% availability without creating a full DR replica of the system
• Reduced system administration and maintenance tasks
• Advanced operational intelligence
• Increased indexing capacity through additional licenses, without platform changes
• More data retention without increased operating costs


Now that you know why Aurizon chose Splunk Cloud, learn more about how they rolled it out and configured it by watching the presentation recording and checking out the presentation itself.


    All presentations from Splunk.conf2015.

    Save the date and RESERVE YOUR SPOT for .conf2016:
    Sept 26-29, 2016 | Walt Disney World Swan and Dolphin Resort


    .conf2015 Highlight Series: Splunk Cloud Keeps Orion Talking


    At .conf2015, Orion Labs’ Dan Phung showed how his company brings together the cloud, wearable technology, and the Internet of Things with Splunk. We take a look at what he shared during .conf below, but feel free to check out the session recording and his presentation slides for even more detail. And don’t miss the video overview below too.

    Science fiction is the stuff of dreamers, but these dreams sometimes come true. Author Arthur C. Clarke envisioned using geostationary satellites for telecommunications relays. Edward Bellamy, in 1888, envisioned the concept of credit cards. Even Aldous Huxley, back in 1931, envisioned a pill that could make unhappy people happy. Crazy stuff!

With that in mind, we couldn't help but think of the possibilities that Orion Labs' Onyx communication device could bring. Onyx is a simple, wearable device that allows the wearer to record a message that is distributed via their cell phone to an Orion server and then on to a pre-defined group of contacts in real time. This brings the cloud, wearable technology, and the Internet of Things together. Sounds simple, but this is so much more than what your childhood walkie-talkie was capable of.

A company of 35 people, Orion relies on third-party tools and services to stay lean and responsive. That means using cloud-based services such as GitHub and PagerDuty. And to simplify the management of those systems, hardware and software, Orion uses Splunk Cloud.


    Cloud, meet wearable technology

As soon as you activate the Onyx, your voice begins streaming to endpoints: your group or team. Your voice (data) goes to Orion's servers along with a host of other information from the Onyx itself, ranging from battery status to any errors encountered. Orion uses this operational data to make informed, data-driven business decisions.

    To make these decisions, Orion uses Splunk Cloud.

Analyzing information from application servers, third-party tools, and devices, Splunk provides visualizations in the form of graphs that indicate server load balance, quality, users per minute, messages sent, device battery status, latency, and much more. Orion also uses Splunk to create alerts, to debug applications in real time and find errors before they show up in production, and to monitor the production environment.

    Learn more here:

    All presentations from Splunk.conf2015.

    Save the date and RESERVE YOUR SPOT for .conf2016:
    Sept 26-29, 2016 | Walt Disney World Swan and Dolphin Resort

    .conf2015 Highlight Series: EnerNOC uses Splunk to Get a Grip on Power


    This post is inspired by our most recent announcement with EnerNOC, but read on for more details.

    From cruising altitude, the modern energy industry seems like an island of calm. But as your metaphorical jet gets closer to land, the messiness begins to unfold around you. Be it government regulation, evolving technology, spikes in fear relating to nuclear energy, or even the ability to harness solar or wind power to put energy back into the grid and, gosh, get paid by the power company, there’s a dizzying amount of complexity behind every power bill that increases or decreases your price per kilowatt hour. And that complexity affects your bottom line.

    So, in this chaotic world, how can you know you’re getting the best price for your power?

    Power held in check

    That’s where Boston-based Splunk customer EnerNOC comes in with their Energy Intelligence Software (EIS) for enterprises and utilities. Among its benefits, the EIS solution helps companies:

    • Get the best price for energy
    • Streamline compliance with regulations such as ENERGY STAR, GRESB, CDP or ISO 50001
• Reduce time spent tracking accruals, budgets, and forecasts while also improving accuracy
    • Evaluate the efficiency of different buildings, plants, production lines, and the teams responsible for them

To manage the wealth of data EIS tracks, EnerNOC initially developed a homegrown solution for data analysis, but it was not sufficiently scalable. This presented a challenge for customers with large or complex systems to manage.

    Information is power

    That’s where Splunk comes in. EnerNOC uses Splunk Enterprise on the Amazon Web Services (AWS) cloud to manage data from numerous sources including application server logs, Apache logs, custom application logs, and much more. Key benefits from Splunk include:

• Real-time operational visibility into the data and metrics users need and expect, with high error-free throughput and near-zero latency
    • Accelerated application development and testing
    • Improved DevOps collaboration

You can learn much more about EnerNOC's challenge and solution by reading this case study. Interested in going even further down the rabbit hole? Check out the EnerNOC breakout session from .conf2015, featuring a recording and slide deck.


    All presentations from Splunk.conf2015.

    Save the date and RESERVE YOUR SPOT for .conf2016:
    Sept 26-29, 2016 | Walt Disney World Swan and Dolphin Resort

    .conf2015 Highlight Series: Tracking Business Transactions with Splunk – Northern Trust Bank


Continuing with our theme around Business Process Analytics, this blog highlights how Northern Trust Bank leverages Splunk to gain an end-to-end view of its financial transactions. They presented at .conf2015, and you can listen to their amazing story here or download their presentation.


Headquartered in Chicago, Northern Trust Bank is a "Bank for the Banks". With over $120 billion in banking assets, $6 trillion in assets under custody, and $887 billion in assets under management, the bulk of its business is to provide services to other banks and institutional clients. While they have a retail presence, it is a minor subset of their business.

As a result, most of their transactions are of high value, and the bank also processes a high volume of them. With a large customer base, these transactions can be initiated by thousands of different systems outside the bank, which may be based on off-the-shelf vendor software or built organically by various institutions. Once a transaction is initiated, it goes through multiple processes or steps within the bank. For example, a cash transaction might go through a validation step, a fraud detection step, then liquidity checks, and so on. Each of these steps is processed by a different application or system, which in turn is often built organically by the bank or based on off-the-shelf vendor software.


To add to the complexity, as transactions flow through these thousands of systems, the structure of the data or the metadata of the transaction changes. For example, accountID in one system might be represented as accountNum in another, and transactionID in one might be represented as documentID in another.
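
In Splunk terms, one common way to reconcile this kind of field-name drift is a field alias defined in props.conf. The stanza below is a generic sketch for illustration only (the sourcetype name is hypothetical), not a description of Northern Trust's actual configuration:

# props.conf - map vendor-specific field names onto the common names used for correlation
[vendor_payment_feed]
FIELDALIAS-normalize_account = accountNum AS accountID
FIELDALIAS-normalize_txn = documentID AS transactionID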

Hopefully the picture of complexity is clear ☺: thousands of systems, each with its own representation of the data, serving as conduits for a high volume of high-value transactions. In this complex landscape it was challenging for Northern Trust Bank to get an end-to-end view of the process flow. This is where Splunk came in!

Building on Splunk's open platform, Northern Trust Bank used its APIs to create a custom UI for its operations team, enabling them to view the entire lifecycle of a transaction and also giving them a view of system health.


The operations team can now simply enter Google-like search criteria (for example, the amount of a transaction, client name, or date) and see where the transaction is in its lifecycle. They can also click on a transaction to investigate any issues.

    In addition to being able to locate a transaction and gain an end-to-end view of the process flow, Northern Trust Bank is also able to perform analytics around transactions, measure the velocity of the transactions, and detect any abnormal events.

    I can view the entire lifecycle of a transaction and gain an end-to-end view, so what?

Operating in a highly regulated environment, Northern Trust Bank has been able to improve its Straight Through Processing (STP) through this end-to-end process view. STP is an initiative in the financial world to optimize the speed at which transactions are processed. Faster STP reduces settlement risk by improving the probability that a contract is settled on time.

In addition, the bank is able to avoid or reduce financial penalties by reacting quickly.

    Lastly, Northern Trust Bank is able to deliver a superior customer experience by being able to answer their status questions quickly.

    Whether you are a financial institution looking to settle transactions or a retailer looking to gain end-to-end visibility into the order management process, the underlying challenges are similar and the upside is tremendous!

Splunk enables organizations to gain a data-driven view of their business processes in real time. By stitching together and correlating data from multiple application silos, organizations gain end-to-end visibility into complex business processes and can rapidly onboard new data sources as the underlying process changes.

    Stay tuned for more awesome stories from our amazing customers!

    Happy Splunking!

    Manish Jiandani
    Director, Solutions Marketing
    Splunk Inc.

    Save the date and RESERVE YOUR SPOT for .conf2016:
    Sept 26-29, 2016 | Walt Disney World Swan and Dolphin Resort

    What’s next? Next-level Splunk sysadmin tasks, part 3


(Hi all – welcome to the latest installment in the series of technical blog posts from members of the SplunkTrust, our Community MVP program. We're very proud to have such a fantastic group of community MVPs, and are excited to see what you'll do with what you learn from them over the coming months and years.
–rachel perkins, Sr. Director, Splunk Community)


    This is part 3 of a series.
    Find part 1 here: http://blogs.splunk.com/2016/02/11/whats-next-next-level-splunk-sysadmin-tasks-part-1/.
    Find part 2 here: http://blogs.splunk.com/2016/02/16/whats-next-next-level-splunk-sysadmin-tasks-part-2/

    Hi, I’m Mark Runals, Lead Security Engineer at The Ohio State University, and member of the SplunkTrust.

    There can be numerous challenges involved with ingesting data into your local Splunk environment. Because Splunk works so well out of the box against so many types and formats of data, it can be easy to overlook the complexity of what is happening behind the scenes.

So far in this series we've talked about ways to validate some of the basic assumptions people have as they search and look at data in Splunk: these events happened on that server at this time. In retrospect I should have used that line at the beginning of this series. In part 1 I talked through a way to make sure the values in the host field are correct, and in part 2 that the local server time is set correctly. Time issues with your data go beyond simply making sure local server clocks are right. However, getting that set correctly is like buttoning your shirt with the correct first button and hole. Once that is addressed, the next step is to identify cases where there is an extreme or significant gap between when the data was generated and when it came into Splunk. This is part art, part science. At a base level, the 'science' is pretty easy: subtract _time from _indextime. The art is masking your ire when you talk to system administrators about how they haven't been managing their systems correctly! I kid, I kid. Actually the art is trying to identify which systems or data sources are having time or other data ingestion issues, whether the cause is server or Splunk related, and where to apply a fix.

    The two categories of time issues

    I tend to lump time issues into two categories: availability and integrity.

    Let’s say you have an alert set up to run every 15 minutes looking at the last 15 minutes’ worth of logs from a particular sourcetype – only it takes 20 minutes or more for the data to come in. The data will eventually be placed in its chronologically correct position but your alert will never fire. Availability.

    Conversely, let’s say you are investigating an outage or security issue that happened at a particular time, only one of the data types is generated in a different, and unaccounted for, time zone compared to the rest of the data – you will likely miss related events. Integrity.
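
For the availability case above, one common workaround (a generic sketch, not something prescribed by this series; the index, sourcetype, and stats clause are placeholders) is to key the alert's window off index time rather than event time, so late-arriving events are still evaluated on a subsequent run:

index=web sourcetype=access_combined _index_earliest=-15m _index_latest=now
| stats count by status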

    Solutions and resources

There are more issues and possible solutions in this area than I could possibly cover in one or even several blog posts. As a quick start, let's look at some Splunk configuration things to do and look for. The first is the fact that forwarders are set by default to send only 256KBps of data. A server generating more data than the forwarder can push is one reason you might see a delay in data being ingested. This can be found with a query like the following:

index=_internal sourcetype=splunkd "current data throughput" | rex "Current data throughput \((?<kb>\S+)" | eval rate=case(kb < 500, "256", kb < 520, "512", kb < 770, "768", kb < 1210, "1024", 1=1, "Other") | stats count sparkline by host, rate | where count > 4 | sort -rate,-count

If a forwarder has just been restarted it will likely have to catch up, which is why the query has its where statement. The sparkline output looks funky in email form, but my team has this query run to cover a midnight-to-midnight stretch. The numbers' placement can give insight into when the limit was hit and what might be happening – i.e., a busy server that was rebooted, or a forwarder consistently hitting the limit whose throughput limit we need to raise. That can be adjusted via the forwarder's limits.conf > [thruput] maxKBps = whatever. There is a dashboard related to this and other forwarder issues in the Forwarder Health Splunk app.
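
For reference, a minimal sketch of what that change looks like on the forwarder; 512 is just an example value, and 0 means unlimited:

# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder (or in a deployed app)
[thruput]
# Default on universal forwarders is 256 KBps; raise it if the host consistently hits the cap
maxKBps = 512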

There is some anecdotal evidence that some of the newest forwarders might not be generating this internal message, or that the conditions for the event's generation have changed, which hopefully isn't the case (!!). I have an open case with Splunk looking into this; I'll have this post updated if something is determined one way or the other.

The next thing to do is update your props.conf time-related settings for your sourcetypes, especially TIME_FORMAT. This and related settings will make sure Splunk understands the timestamp correctly. This subtopic alone can be long and involved. While I hate to hawk my own crap, the OSU team has had to do a lot of work in this space (work months and ongoing /shudder) so I'll refer you to the Data Curator app. I'm not sure whether Splunk dropping events or recognizing timestamps incorrectly is worse, but either way, if events aren't where you expect them, it's bad. The following is one of the queries in the Data Curator app that looks for dropped events due to timestamp issues; you could run it over the last 7 days or so:

    index=_internal sourcetype=splunkd DateParserVerbose "too far away from the previous event's time" OR "outside of the acceptable time window" | rex "source::(?<Source>[^\|]+)\|host::(?<Host>[^\|]+)\|(?<Sourcetype>[^\|]+)" |rex "(?<msgs_suppressed>\d+) similar messages suppressed." | eval msgs_suppressed = if(isnull(msgs_suppressed), 1, msgs_suppressed) | timechart sum(msgs_suppressed) by Sourcetype span=1d usenull=f

    Besides the Data Curator app, I recommend other Splunk resources like Andrew Duca’s Data Onboarding presentation from .conf15.
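
For context, explicit timestamp settings in props.conf for a given sourcetype typically look something like the following. This is a generic sketch with a made-up sourcetype name, format, and time zone; adjust it to whatever your events actually look like:

# props.conf on the parsing tier (indexers or heavy forwarders)
[my_app_logs]
# Timestamp sits at the start of the event, e.g. 2016-02-16 09:30:01,123
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = America/New_York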

So now let's generically say Splunk is configured to recognize your timestamp formats correctly and the forwarders are able to send data just as quickly as their little digital hearts are able to pump it out. As mentioned above, we need to look at the _indextime field. To find the delta, or 'lag', between event generation and ingestion, you simply subtract one from the other via a basic eval (| eval lag = _indextime - _time). If you want to see what that index time actually is, you'd need to create a field to operate as a surrogate, like | eval index_time = _indextime, and then maybe a | convert ctime(index_time), unless you are a Matrix-like prodigy who can convert the epoch time number into a meaningful date in your head.
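
Pulled together, a quick lag check over a short window might look something like this (a sketch only; the index and field names are arbitrary):

index=main earliest=-15m
| eval lag_sec = _indextime - _time
| eval indexed_at = strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| stats avg(lag_sec) as avg_lag max(lag_sec) as max_lag by host sourcetype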

    A basic and fairly generic query to review your data on the whole might be something like this, though it tends toward looking for time zone issues. If you want to have it look a bit broader at probable time issues, adjust the hrs eval to round(delay/3600,1) or just remove the search command right after that eval.

index=* | eval indexed_time = _indextime | eval delay = _indextime - _time | eval hrs = round(delay/3600) | search hrs > 0 OR hrs < 0 | rex field=source "(?<path>\S[^.]+)" | eval when = case(delay < 0, "Future", delay > 0, "Past", 1=1, "fixme") | stats avg(delay) as avgDelaySec avg(hrs) max(hrs) min(hrs) by sourcetype index host path when | eval avgDelaySec = round(avgDelaySec, 1)

One thing I'm trying to do with the rex command in this query is cut out cases where a date is appended to the source file name, in an effort to cut down on granular noise. Note that this will take some time to churn through depending on your environment, so I recommend a relatively small time slice of not more than 5 minutes or so. In reviewing this post, fellow Community Trustee Martin Müller pointed out that a query using tstats would be more efficient. I would agree, though I feel what we are talking about is chapter 3 or 4 material and tstats is like chapter 6 :). At any rate, a rough query he threw together is:

    | tstats max(_indextime) as max_index min(_indextime) as min_index where index=* by _time span=1s index host sourcetype source | eval later = max_index - _time | eval sooner = min_index - _time | where later > 60 OR sooner > 10

An additional tip: if you are on the North/South America side of the planet and want a quick way to look for unaccounted-for UTC logs, you could do the following. This will show logs coming in from the 'future':

    index=* earliest=+1m latest=+24h 

When it comes to investigating an overloaded forwarder or sourcetype in particular, a quick go-to for me is:

host=foo source=bar (and/or sourcetype) | eval delta = _indextime - _time | timechart avg(delta) p95(delta) max(delta)

    What I’m looking for here are basic visual trends like: is there a constant delay or does the delay subside at night/during slow periods?

Overall, time issues can be somewhat troublesome to find and ultimately fix. This might involve adjusting the limits on forwarders as we've talked about, adding additional forwarders on a server to split up the monitoring load (e.g., a busy centralized syslog server), updating props settings, or having conversations with device/server admins. I've not worked in a Splunk environment that collects data from multiple time zones. If you do and care to share some of the strategies you've used to work through those particular challenges, please share in the comments!
    Hopefully you’ve found this series useful. It has been fun to write and share!

    .conf2015 Highlight Series: City of LA and Splunk Cloud as a SIEM for Award-Winning Cybersecurity Collaboration


    Registration and call for papers is now open for Splunk .conf2016. We can’t wait to host you all at the Walt Disney World Swan and Dolphin Resorts in Orlando, Florida; September 26-29, 2016.
     
     
During last year's Splunk .conf2015 we were lucky to have Timothy Lee, the CISO of the City of Los Angeles, share his case study on why his department chose Splunk Cloud as a SIEM for one of their cybersecurity initiatives and how it is used. Though we're summarizing his key points in this post, you can get the complete picture by checking out a recording of Tim's presentation, and access to his slides, at the bottom of this post.


    The Scenario

Tim began by laying out the situation, prefacing the presentation by saying, "If your security team is still debating if you need SIEM, you've got a bigger problem." Los Angeles is a city of 4 million people, the second-largest in the US, employing 35,000 full-time employees who use 100,000 connected devices, or event generators. When the mayor issued a directive to address a number of cyber threats, which included the need to identify and investigate threats and intrusions, disseminate alerts, and coordinate incident responses across the city, Tim's team had to get their act together. Unfortunately, his team faced quite a few challenges before rolling out Splunk; here are just a few:

• Understaffing
• Dispersed log-capturing capabilities
• Little use of collaboration tools
• No incident management platform
• No threat intelligence program
• Limited situational awareness and operational metrics for the entire city

    The Solution

To tackle these challenges, Tim and his team opted to create an integrated security and operations center using Splunk Cloud and Splunk Enterprise Security. Splunk Cloud, for example, provided the ability to manage and process logs from the city's firewalls, proxies, Active Directory, routers and switches, and much more. These tools enabled his team to collect and report information, collaborate with other departments and organizations both internal and external, and raise the visibility of threats.


Check out the recording and slides to learn how Tim sold the program internally (such as using executive dashboards), what key lessons he learned, and what resources (including specific analyst reports) he used to make his decision.


    For the full recording, check out:
    Splunk Cloud as a SIEM for Cybersecurity Collaboration

    GSN Homeland Security Award

If the solution needed any more validation, it certainly received it toward the end of last year when it was announced that the City of Los Angeles had been selected as a GSN Magazine Homeland Security Award winner, receiving the "Most Notable Cybersecurity Program, Project or Initiative" award.

    All presentations from Splunk.conf2015.


     
     
