# Hedge-Ops Software

URL: https://hedge-ops.com/

Software & Solutions. A data-driven approach to career management.

---

# InSpec Tutorials

URL: https://hedge-ops.com/inspec/

Learn InSpec compliance as code with this step-by-step tutorial series. From Hello World to Azure resource validation.
InSpec Tutorial Series

Learning InSpec is completely approachable, and if you follow these tutorials, you'll be writing compliance as code in no time!

## Tutorial Series

- [Day 1: Hello World](/posts/inspec-basics-1)
- [Day 2: Command Resource](/posts/inspec-basics-2)
- [Day 3: File Resource](/posts/inspec-basics-3)
- [Day 4: Custom Matchers](/posts/inspec-basics-4)
- [Day 5: Creating a Profile](/posts/inspec-basics-5)
- [Day 6: Ways to Run It and Places to Store It](/posts/inspec-basics-6)
- [Day 7: How to Inherit a Profile from Chef Compliance Server](/posts/inspec-basics-7)
- [Day 8: Regular Expressions](/posts/inspec-basics-8)
- [Day 9: Attributes](/posts/inspec-basics-9)
- [Day 10: Attributes with Environment Variables](/posts/inspec-basics-10)
- [Day 11: Validating Azure Resources with InSpec Azure](/posts/inspec-basics-11)

## Additional Resources

- [Setting Up Compliance](/posts/setting-up-compliance)
- [Tour of Chef Compliance](/posts/tour-of-chef-compliance)

---

# Blog Posts

URL: https://hedge-ops.com/posts/

---

# How We Work

URL: https://hedge-ops.com/services/

Three engagement types — Audit, Sprints, Retainer — designed to align delivery, organization, and relationships.

---

# Working Assumptions: Hybrid Professionals

URL: https://hedge-ops.com/posts/working-assumptions-hybrid-professionals/

Let's explore the value of hybrid professionals, individuals with diverse skill sets, and how companies often fail to leverage their unique talents due to rigid career paths. We'll discuss the need for creativity and flexibility in recognizing and utilizing these versatile professionals' strengths.

Ever since I read the book [Range: Why Generalists Triumph in a Specialized World](https://davidepstein.com/range/) by David Epstein, I've felt very validated and inspired by his take on how having a range of professional experiences enhances one professional experience after the other. I [wrote about](/posts/inflection-points-hall-of-fame) leveraging this diverse skill set myself when breaking into tech, but I have had mixed luck when trying to convince employers of this benefit.
The problem is that they have a hard time knowing how to handle a non-linear career path. A lot of people like the _idea_ of hiring folks who don't fit into a particular box and see the definite benefits, but they lack the skills to know how to best leverage those diverse skills.

## Working assumption

The common [working assumption](/tags/working-assumptions) is that folks will do fine in the box we put them in or on the ladder we set before them. But folks (like me) don't often fit well into this standard because our skills are often out of balance on most leveling docs, and this creates a lot of frustration because you're expected to use your skill set proportionately. So if you are really strong on soft skills but need to grow on the technical side, you're expected to hold off using your upper-level soft skills until your technical skills catch up. This was the case with me. But my soft skills will likely always be stronger than my technical skills, so why wait? Why not leverage the soft skills now? Well, because it takes some creativity.

## There's hope

I have honestly felt pretty hopeless about this particular working assumption ever getting resolved, especially in a tech career, until I met [Dr. Sarabeth Berk Bickerton](https://www.morethanmytitle.com/about-sarabeth-berk), author of [More Than My Title](https://www.morethanmytitle.com/shop) and leading expert on hybrid professional identity. She calls these folks "[hybrid professionals](https://www.morethanmytitle.com/blog/2020/1/8/what-is-a-hybrid-professional)" and identifies as one herself. She describes them as individuals who integrate multiple professional identities into a cohesive whole, allowing them to be very versatile and valuable in the workplace. Instead of merely switching between different roles and climbing a predetermined ladder, hybrid professionals combine their skills and experiences from various fields to create unique value propositions.
> "Hybrid professional identity defies traditional job titles because they work at the intersections of disparate identities." -- [Dr. Sarabeth Berk Bickerton](https://www.morethanmytitle.com/about-sarabeth-berk)

## But what about the CaREeR mAtRIx?

Most companies will have some sort of leveling documentation to define what growth looks like at their company. Career matrices are pretty common in tech. Others call it career pathing, career ladders, job-leveling matrix, etc. The common thread is that you start in one place and grow linearly, or evenly across all criteria, until you move up to the next level. But how can the unique value proposition of a hybrid professional be realized unless the career matrix is adjusted for their uniqueness?

As a hybrid professional myself, I am keenly aware of the working assumption in the large majority of enterprises that relies on the belief that all employees, first of all, _want_ to follow a predetermined career path and, secondly, can predictably develop all of the necessary skills _evenly_. For example, a company will hire a junior engineer with the assumption that they have just graduated college, don't have a lot of work experience, and need to grow their soft skills to the same degree as their technical skills. This assumption is built into the career matrix which the employee is then judged against. There is no room in the system for the employee's technical skills to be out of balance with their soft skills. They are expected to deliver and grow completely in balance.

But what if you hired someone from a code camp who is entry-level at software development but has had another couple of successful careers in other fields? They already know how to work with other teams, manage projects and people, and shepherd products. They just need the technical skills to be able to do all of those things more successfully. Are you really going to make them follow the same career matrix as the person that just graduated college?
Should you assume that they don't have the desire to use those advanced soft skills at all until their technical skills catch up? Should you assume that their soft skills won't be valuable to you until their technical skills are equal? Well, according to Dr. Berk and my own personal experience, you will be missing out on the unique skill set this person has to offer, and they will get bored, feel unvalued, and likely move on.

## Solution

As an individual contributor, what I have done in the past to scratch this itch of wanting to deliver beyond just my technical abilities is to simply offer the company those other skills to use. I would proactively come up with ways that I could contribute and communicate those ideas very clearly, and either the ideas were gladly accepted or the leader would get nervous about me getting out of the box they put me in. But either way, the ambiguity of whether or not I could use these skills was removed and I knew my next move.

The times this really worked most effectively in my favor were when I was at small startups of fewer than 75 people; the fewer the better, honestly. As a company grows, its creativity and flexibility tend to diminish. Nevertheless, the employee needs to be clear about what their value is and offer, offer, offer. It's then up to the employer to take it or leave it. As Dr. Berk [points out](https://www.morethanmytitle.com/framework), successful teams need all types of workers to be high performing:

- Experts and specialists
- Generalists
- Hybrid professionals

The employer needs to allow hybrid professionals to show their full value, and if that requires a little creativity, then they should trust that the extra effort will pay off in the long run. The alternative is that they simply will not be able to retain these workers because they will get frustrated and move on. This requires creative leaders willing to adjust career matrices based on an individual's unique offering.
## Conclusion

I have, of course, loved working with absolute experts and jacks-and-janes-of-all-trades generalists, as well. They're so good at their crafts, have all the answers, and I've learned a ton from them. But I have also experienced the fun of working with other hybrid professionals, people working at the intersections of their various expertise and experiences. I hope you get to as well because there is a lot to learn from them, too.

If you want to get better at fostering this creativity in your own career or on your team, [Dr. Berk's resources](https://www.morethanmytitle.com/) are a great place to start! Another good thing to do would be to start becoming intentional about recording the unique contributions on your team and assessing what each individual has to offer. Then reward that uniqueness!

---

# Working Assumptions: Performance Reviews

URL: https://hedge-ops.com/posts/working-assumptions-performance-reviews/

Performance reviews are crucial for career growth and development. This blog post tackles common false assumptions and highlights how intentional, consistent documentation can transform your review process and career trajectory.

The next stop on our tour of problematic [working assumptions](/tags/working-assumptions) is performance reviews. When review time rolls around, are you the “_game on!_” type, the one who has all their notes and insights ready and updated and all you have to do is put your outline to prose and submit it? Or do you find yourself waiting until the very last minute, writing out some bullet points that you thought of off the top of your head, then asking ChatGPT to make it more flowery and submitting that? I’ll admit that I have been on both ends of that spectrum before.
I have only fallen on the procrastination side of the spectrum a time or two (and not when I was a manager), but when I did, it was likely because I had lost faith in some aspect of the process because of some working assumption that I was holding onto about performance reviews. Let’s take a look from both the direct report and manager perspectives.

## Working assumptions held by direct reports

Here are some _false_ working assumptions that we sometimes hold as direct reports when it comes time to do our performance reviews (and maybe even applies to peer reviews, too):

- _It’s just a formality._ Nothing will really change as a result of this exercise.
- _Only my manager’s opinion matters._ The review is solely based on the manager’s perspective.
- _It’s only for negative feedback._ So if I did fine, I can keep it short.
- _It’s only for raises and promotions,_ and I just got one, so I can keep this one short.
- _My manager already knows what I did,_ so I don’t have to go on and on bragging about myself.

### Reality

The reality of doing self reviews is that they absolutely matter. When taken seriously, reviews can significantly influence career development, compensation, and growth opportunities. Compensation and advancement decisions, of course, may be influenced by broader organizational factors and not just individual performance, but we need to ensure that we’ve dotted every _i_ and crossed every _t_ when it comes to our self-reviews. This is our chance to shine and use all the tools at our disposal to do so, like keeping [good notes](/posts/working-assumptions-one-on-ones), hype sheets, solid goal tracking, etc. If you’re not proactively communicating your achievements, then who will? This is something, ideally, that you’re doing throughout the year but especially at review time. Review time is also a chance to communicate your next goals, showing that you are always forward looking.
Additionally, a lot of our feedback can come from peers, so [nurturing those connections](/posts/working-assumptions-networking) is so important. I worked with someone once who was very intentional about this. She would have an after-project retrospective as a rule of thumb, where she would meet with the other folks on the project individually and ask questions like, ‘_what went well_’, ‘_what didn’t_’, ‘_what were my strengths in this project_’, ‘_how could I improve_’, etc. She was proactive about learning from projects with her teammates, so when review time rolled around, I knew exactly what her strengths and accomplishments were because I had the retro notes! So smart.

## Working assumptions held by managers

Of course, managers can hold all the same faulty working assumptions as direct reports, but when you’re reviewing someone else, even peers, our assumptions may have another layer, such as:

- _It’s only for criticism._ I should use this to only highlight growth areas.
- _It’s only for encouragement._ I don’t want to demotivate my team with a bunch of criticism.
- _Let’s get this over with._ This is the only time of the year I have to deal with this.
- _Everyone understands the criteria and how they’re being evaluated._ I’ll just take the direct report’s self review and use that.
- _Metrics and quantitative data are the only things that matter._ It’s just about the numbers.

### Reality

The reality of reviewing direct reports or peers is that while all of those assumptions above are absolutely false, we also only have so many hours in the day and things fall through the cracks!

- Yes, reviews should absolutely highlight achievements, recognize strengths, and discuss development opportunities.
- Yes, managers need to ensure that performance criteria and expectations are clear and communicated regularly.
- Yes, constructive feedback, delivered thoughtfully, is crucial for growth and improvement, and qualitative factors, such as teamwork, attitude, and potential, are equally important.

But what a lot of us fail to remember is that effective performance management is _ongoing_, with regular feedback and check-ins and lots of notes taken at each. If you know me personally, you may have picked up on the fact that I am extremely forgetful. But because of that, I write _everything_ down that is important to remember. The act of writing it down (or typing it) helps to lock it into my memory and obviously gives me a record of it for later. Intentionality is key here.

## Bias of the present

Whether you’re a direct report or a manager, it’s important to remember that keeping studious notes on everything prevents us from falling prey to the bias of the present. How many times have we all found ourselves sifting through a year’s worth of user stories, git pull requests, docs, or whatever to try to find what the heck we or our direct reports worked on in the past year to put it into a review, only to give up and put down whatever was freshest on our minds?

## Does note-taking really work?

_“But Annie, you just said to take a ton of notes, and now you’re saying I won’t look at them?”_ Let’s get real for a sec. Yes, on one hand, note-taking is low-hanging fruit. Doing it will absolutely set you up for success, but it won’t guarantee success. You have to be realistic about this mass of data you’re collecting. How are you going to aggregate it to make it meaningful? Well, my suggestion is to make it part of your job to go through those notes and summarize and aggregate them into meaningful points every 6-12 weeks or so. Then you’re really ready when review time comes around. But what if you had an app to help with that?

## We’re here to help

We’ve been working hard at creating software to help with exactly this situation.
Even when you have great note-taking tools, you probably feel overwhelmed by the sheer mass of notes you’ve taken for yourself and your whole team, and by the time you get around to aggregating the successes, you’re exhausted. We want to help you not let important insights fall through the cracks. We’re here to do the heavy lifting.

We have found the landscape of people management tooling to be abysmal simply because it wasn’t actually built with you in mind. It was more likely written with HR and their goals in mind, which is fine, as they have certain criteria they need to track, too. But the distinction of who a tool is built for makes a lot of difference. What if you had tooling written specifically for your use-case and goals?

## Conclusion

Our vision is to change the way people look at career management tooling from simply a box to check off assigned by HR to a tool for themselves that empowers them to create more autonomy, safety, and control in their careers. Along with the other things we’ve talked about in this series, [1:1s](/posts/working-assumptions-one-on-ones), [360s](/posts/working-assumptions-the-360), [networking](/posts/working-assumptions-networking), we want to really revolutionize the way we do performance reviews.

---

# Working Assumptions: One On Ones

URL: https://hedge-ops.com/posts/working-assumptions-networking/

One-on-ones are arguably the most effective tool you have to communicate your wins. Let’s discuss some of the unhelpful working assumptions we have about them and learn how to best leverage the one-on-one for your success.

In this current blog series called [Working Assumptions](/tags/working-assumptions), I believe the most fuzzy and unhelpful working assumptions we hold are about the one-on-one (1:1) meetings between managers and direct reports. Many of us struggle with 1:1s and can’t see how those meetings connect to our goals. Some of us just show up and vent. Others avoid them.
At Hedge-Ops, one of our main goals is to accelerate your career journey through clarity in your professional relationships. Let’s talk about how to get the most out of our 1:1s.

We may not really know if our bosses are our fans and advocates or not. This can be especially problematic if we’re friendly and get along well with them. It can confuse things even more! And then it may be difficult to know if our direct reports are doing good work and meeting business objectives or just talking a good talk. All of the uncertainty can lead to the 1:1 turning into something unproductive for both parties. Let’s take a closer look at some of these assumptions we often make and what to do about them.

## When you’re the direct report

When I [first started](/posts/leaning-in) my career in tech, I was a career changer and had just come out of a long stint at home with kids. I had huge amounts of imposter syndrome, but I knew I couldn’t let it consume me. I knew that building my confidence was absolutely key to my success. Because of that, I took fastidious notes on my progress every dang week. I was determined to show the company who took a chance on me that their gamble was paying off. I knew that seeing my progress documented would only net positive results.

I was very, very right about that. My confidence grew along with my skills, career trajectory, and salary. I had the fastest and highest percentage of growth in my career during those first five years in tech, and I attribute it greatly to my ability to communicate my growth and contributions. About four years in, I [switched companies](/posts/end-of-first-chapter), and I started out at the next company with the same enthusiasm for showing my growth and contributions. Somewhere along the way, though, I became a little more casual about it. I started assuming that I could just talk off the cuff about my progress in my 1:1s instead of writing it out.
The result was that I never really quantified anything in a way that my manager could then take to his leaders and say, “Wow, look what Annie has been doing.” We just weren’t able to translate what I was talking about in 1:1s into common goals that we were both able to accomplish. I was being just a little too casual for that. I can directly correlate my growth opportunities and salary over the years to how well I managed my 1:1s with receipts for my contributions. That first year at the new company, I saw growth in the form of a promotion, a salary increase, and getting to work on good projects. After I got lax in communicating my growth and contributions, lots of complaining crept into my 1:1s, and soon my opportunities dried up. Please note, nothing changed in my output, only in the communication of my output.

What happened was that I was very friendly with my manager, using our 1:1s to wax philosophical on all things DevOps and leadership. Was it fun? Sure it was. I got along really well with my boss. Was it helpful to my career? Not really. Outside of the 1:1 I felt frustrated, like my opportunities were running out at that place. Yes, I think having casual and unstructured chats with your boss is very important, but not in the absence of communicating your growth and contributions.

## Direct report working assumptions

You certainly have your own story that may end with slightly different unhealthy assumptions. They may be something like:

- 1:1s are only for negative feedback, so if we’re not having a 1:1, then all is good.
- 1:1s are the manager’s responsibility to drive the agenda.
- 1:1s should be limited to talking about the current project.
- 1:1s are only for evaluation purposes, so I can’t talk about anything other than my growth and contributions.
- Once you discuss things at a 1:1, everything is resolved; my boss-person will handle things, and I don’t have to set action items or follow up about it.
### What direct reports can do instead

Sometimes we don’t mean to form these assumptions; they just happen over time because we get busy and lax with our discipline of having really neat and organized 1:1s. When it all comes down to it, though, having effective 1:1s as the direct report is simple:

- Clearly state your goals for growth and contributions.
- Note the progress you’ve made on those goals, no matter how small.
- Show the aggregated progress quarter after quarter.
- Have it all documented and easily accessible.

Even though it’s simple, it sounds like a lot of work, I know. But this is the main way your boss knows what you’re doing. We can sometimes assume that our performance is being witnessed, but it’s usually not. If you document all of this for your 1:1s, though, you get to be in charge of the wording. You get to market yourself and be your own biggest advocate. You get to create a record of how you’re growing, contributing to projects, helping other people, and meeting your goals, goals that obviously align with company goals because you are able to wordsmith them in a way that shows it. No one else will do this for you.

If you’re having a hard time with this because it feels like you’re bragging about yourself, you can take the ego out of it and just make sure you document things like:

- Cross-team collaboration (especially when you’ve been praised and have screenshots to prove it)
- Your learning and how it will benefit the company
- If you’re mentoring someone, either formally or informally

And now, if you’ve been doing this, you have a wealth of information on which to base your performance review! You can even summarize each quarter to stay on top of performance review preparation.

## When you’re the manager

Conversely, if you’re the manager, you may not know how to get the most out of your time with your direct reports.
I’ve been on both sides of the meeting, and when I’ve taken fastidious notes, I’ve never been disappointed or felt like it was a waste of my time. When I actively took notes on how my team was meeting their goals for growth and performance, it revealed a lot about how they worked. In addition to tracking the progress on their goals and performance, I was looking for things like whether they drove the conversation or I did, and whether they were creating goals because they wanted to grow and contribute or because they were trying to appease me. I followed the same general template with each person, but I looked for whether or not they took charge of the conversation. It looked different for every level, of course, but I wanted them to take the reins.

### Manager working assumptions

We can certainly hold all of the unhealthy working assumptions that the direct reports hold, with the addition of a few more, such as:

- The direct report should drive the meeting (I think this depends on the level of the direct report).
- 1:1s are primarily for getting my direct reports unblocked and making progress.
- 1:1s are so that my direct reports can vent and get things off their chest.
- I will leave it up to my direct report to track their progress and goals. It’s their job.

### What managers can do instead

If a manager isn’t as engaged as the direct report in 1:1s, then the direct report will get little out of it. You’re there in service to them, as a leader, to grow the people on your teams, not simply to make sure your projects are on track. If 1:1s are boring, then maybe you’re doing them too often. If they’re too action packed, perhaps you lack a process to communicate more frequently. This relationship should not, of course, devolve into a therapy session, but we are here to solve problems. Additionally, tracking performance, even and especially for top performers, is vital!
People avoid this and it always bites them, whether they are looking for performance justification for a promotion or need justification for a performance improvement plan. Write it down and make it clear. If you can’t write it down in a way that both people can see it, you’re not communicating clearly enough. And help them with their goals. Ensure that their mid-range and long-term goals are talked about. Are their values included in your conversations? What is important to them, and how can you help them align that with company values and goals?

## Feelings matter

How you feel about your 1:1 is important. We’re taught to be so objective in business to combat biases, and I agree with this. But when we discount our feelings about certain interactions, I think we leave out an important part of our humanity that may have a role in our interactions. I’m not talking about letting unhealthy biases come into play, but rather I’m saying that you should listen to certain intuitions that you may be having, such as:

> He listens to all my ideas but never gives me opportunities to implement them. I wonder why. I’m going to see if this becomes a pattern so that I can take action if it does.
> She says that she wants to see me grow in this area, but I’m worried that she’s not going to give me the time or space for that.
> They’re always praising me for my teamwork, but I still am not getting considered for team lead. That feels crummy. What can I do about it?

Positive feelings are important, too:

> She was so positive about my project and shared my success with the VP. I think she’s a real ally.
> She’s always asking about X. I think that’s not that important, but I guess it is to her, so I’ll keep an eye on it.
When we’re summarizing our 1:1 notes, we can aggregate our feelings over time, too, and it will show us valuable insights.

## Conclusion

As with anything, intentionality is key with 1:1s. They are a tool, just like the other topics we’ve discussed in this series, and when they’re used intentionally, you will get the most out of them, no matter which side of the meeting you’re on. I would love to hear any questions you have about this!

---

# Working Assumptions: The 360

URL: https://hedge-ops.com/posts/working-assumptions-the-360/

What will it take to make 360 reviews more beneficial to the employee? What are the real goals of a 360 review? Let’s rethink what it means to grow and develop employees.

My [last post](/posts/working-assumptions-introduction) was an introduction to a series I’m calling “[Working Assumptions](/tags/working-assumptions)”. I want to tackle some working assumptions that we may be working by that aren’t serving us very well. Today’s topic: _the 360 review_.

## Do you know what a 360 review is?

It’s when your boss wants to get feedback on you for the purpose of your growth and development, so they submit an anonymous survey to 5–15 folks, give or take depending on your position, in positions all around yours on the org chart, some above, some below, some peer to you. These folks will then answer a few questions about you and submit it through HR’s (Human Resources) platform of choice. The manager will likely summarize all that data, create a report, let you know what the outcome was, and then make the report available to HR as part of your employee record. More or less.

## What’s the problem with that?

I’ve seen a 360 referred to as a “Talent Development Tool”. Notice what’s important about that phrasing. It’s subtle, but it implies that the tool is used and owned by the manager/boss person. And, yes, that’s great if a manager wants to grow their team. This is what we want—growth-enabling managers.
But this, in my humble opinion, is where the industry gets it slightly wrong. When the ownership of the 360 is obfuscated, then is the employee really benefiting? That begs the question still, if the employee doesn’t benefit, then can the owner of the 360 even benefit? (It depends on their real goals.) Therein lies my first problem with 360s. _How can something designed with the manager/boss person in mind benefit the actual subject of the 360?_

## The real goal of a 360

I’m definitely not arguing that no 360 has ever benefited the employee (i.e. the talent). I’m just arguing that the employee’s growth is not the first goal of the 360. The first goal is that _the company understands this employee’s standing in the company_. The _secondary_ goal is (maybe) to “develop” this employee based on the feedback. However, if the first goal is to understand the employee’s standing, then we have to assume that the employee’s development may not be the next goal. For example, movement of the employee out of the company may be the next goal. Therefore, for development to be a true goal, this would have to assume many things, such as:

1. The feedback was not for rationalization for a layoff or firing.
2. The company knows how best to take actionable measures toward the growth of this individual based on this feedback.
3. The company intends to take those actionable measures.
4. The feedback is accurate.

And unfortunately, there is no way to guarantee any of these things.

## What is really necessary to grow and develop employees?

I believe that true growth in any area of life, including professional, cannot happen unless two very important things are present: _honesty_ and _vulnerability_. [Brené Brown](https://brenebrown.com/), the expert on vulnerability, describes it as "uncertainty, risk, and emotional exposure." This, in my estimation, is incompatible with anything that will ever be exposed to anyone in HR. This is no offense at all to the good folks at HR.
They are doing their jobs; it’s just that HR is not the place for vulnerability. It’s a place for compliance and risk management. It’s a place for the company to understand its employees, not for employees to understand themselves. It’s a place for the company to grow, not for the individual to grow. And we intuitively know this, right? This is why we can assume that the 360 feedback is not going to be 100% accurate or helpful, because we probably aren’t going to be 100% accurate or helpful when we provide reviews for others. Most of the time, we’re probably just nice, especially if we’re reviewing our work-friend. Many of us don’t feel like we can be completely honest, because we know that whatever is in that report can be used as a reason to include them in the next layoff, whether it’s warranted or not. We avoid the risk of any of our words being twisted against our work-friend. We know that there can always be another layoff right around the corner, especially now.

And maybe you totally disagree with that assessment. If so, you may be at the leadership level. 360s tend to be more helpful with directors and higher, where the playing field is more competitive and people tend to be more honest in their assessments of others. I have a feeling that HR is probably equally frustrated about the aggressively-competitive nature of this scenario, too.

## What can we do instead?

How can we have 360 feedback that we can trust is really honest and vulnerable and that will lead to growth? Feedback that you can request from others on your own, when you feel like you need it? Feedback that is anonymous, where you really can’t tell by the wording who wrote it? And what if you then decided how you wanted to take action on your own, regardless of company involvement? Do you think you’d be able to expand what the possibilities for growth look like if they weren’t tied to your HR report?
Do you think that such an autonomous and liberated approach to career development is allowed in the workplace? Well, I believe that it should be. Everyone’s stated goal is the employee’s growth, right? Here at Hedge-Ops we want to enable real growth, not just check boxes. We want to be honest about what it takes. If you’re curious about what we’re building, stay tuned!

---

# Working Assumptions: Introduction

URL: https://hedge-ops.com/posts/working-assumptions-introduction/

An introduction to the “Working Assumptions” series, where we will tackle some assumptions about career growth that we may be working by that aren't serving us very well.

For my next blog series, I want to talk about some _working assumptions_ about career growth and development that we may want to reconsider. There are many programs in place for our growth and development, most of them managed by our companies’ HR (human resources) departments. On paper they seem like good practices, very well thought out and thorough. But for some reason that we have a hard time pinpointing, many of those programs just aren’t working right, producing mixed and often subpar results.

And I’m not blaming anyone, especially not HR. In my experience, I have found HR to be full of good-hearted people trying to do the right thing for the company and its employees. They know that growing employees benefits the company and creates the kind of company culture that they want to foster. And I think that they are just as frustrated as the rest of us when they don’t see the results they expect from these programs and practices. Or conversely, the programs give them feedback that everything is going great, but employees are still unhappy, so HR remains in the dark about problems.

## What does lack of career growth look like?

Ultimately, the biggest evidence that you’re not growing is that you’re just unhappy and frustrated all the time.
Lack of career growth can look like a lot of things, but here are a few examples:

- You’re not getting the promotions you think you deserve
- You’re not getting the raises you think you deserve
- You’re not getting the opportunities you think you deserve
- You’re not getting the recognition you think you deserve
- You’re always frustrated at the decisions of your leadership
- You feel like you’re not respected or valued
- You feel like you’re not growing or learning
- You feel unheard or unseen

Maybe we start to blame ourselves when this happens. “I just need to do that course on X,” or “I just need to show my work off more,” or whatever. The reality, though, is that the system in place was built with the bottom line in mind first, and sometimes the employee’s growth can slip through the cracks. Sometimes those are aligned, but when they’re not, there is a clear winner, and it’s not you, sadly.

## So what is career growth, and do we really need to be growing all the time?

This is another whole post, maybe even a book, but my short answer is no, you don’t have to always be growing, but it helps if you are intentionally being who you want to be. So whenever I say _career growth_ in this series, let it mean for you:

> the intentionality of being who you want to be inside your career

## What are working assumptions?

These are the assumptions that we make about the things that we do for our careers, either of our own volition or demanded by our companies, that are assumed to produce career growth or development. We really want to do what’s right and continue to have forward momentum in our careers, but many times, something just doesn’t feel right or add up. We often don’t question the system that leads us down the path of career growth because, hey, it got us this far, or hey, this is just how you make sure that a lot of people at once are growing in a big company. But what if the system is broken? What if the system is actually holding us back?
## What will we cover in this series?

I want to cover a few things that I think are working assumptions that we should question. Here are a few topics I have in mind:

- The 360 review
- The performance review
- The job search
- The networking event
- One-on-ones
- The mentorship program
- _And whatever else I hear from you all!_

Through this series, I seek to inspire you to overcome these assumptions and create a trajectory that is all your own. Our commitment to helping others succeed is our main driver. We have mentored so many folks through these same issues, and we want to share what has worked and what hasn’t. At Hedge-Ops we are thinking about these problems as opportunities, and we’re building solutions into our software. We want to share some great individual strategies and push for systemic change when needed, too. Stay tuned!

---

# Networking Rules for Job Hunting

URL: https://hedge-ops.com/posts/networking-rules-for-job-hunting/

Job hunters need to follow these rules to realize their full potential.

For many years, in the days of low interest rates and seemingly endless investment, looking for a job was a matter of barely telling anyone that you _might_ be thinking about _maybe, one day_ going somewhere else. All of a sudden, numerous companies and recruiters would be pursuing you, even fighting each other, to get you to come work for them. In the first half of 2024, it’s clear that those days are over. And that has exposed the problems in the typical job search, leaving a multitude of people unemployed with nowhere to go.

There are rules to looking for a job. These rules have always been the rules. In the good times, we can ignore these rules and get away with it. In the bad times, well, it’s time to listen up and pay attention. I would even argue that those who don’t follow the rules in the good times limit their opportunities and even set themselves up for a future layoff. Here are the rules:

## Grow

First, _build your external network_.
Don’t wait until a position is open at someone’s company. It’s too late at that point! Instead, have lunch or coffee with someone in your network, listen to them, and get curious about what you can do to help them. Sometimes helping them will be listening to them. Other times it might be giving them an insight that helps them make progress. Whatever it might be, when you’re there for people and build authentic relationships with them, your connection to other humans will be an asset when you’re ready to make a change.

And if you don’t have a job right now, this is even more important, but it’s also even more important to refrain from seeing everyone as a resource you can extract leads from. Instead, believe deep down inside that you have something to give others and commit yourself to finding that something. I promise you the job will come.

## Balance

Second, _be intentional about the structure of your network_.

Most people I know network quite haphazardly, and apps like LinkedIn actually encourage this. A connection is a connection is a connection, right? Wrong! Instead of the spray-and-pray method of networking, where you connect with as many people as possible, try this:

- Find five to ten people who are ahead of you in your journey who can guide you.
- Find another five to ten people who are your peers, whom you can bounce ideas off of.
- And find five to ten people whom you’re ahead of, whom you can advise and mentor.

When you do this, you quickly find that you have fifteen to thirty people you regularly interact with. If you really connect with these people and have mutually beneficial relationships, you’re one or two degrees of separation from a ton of jobs. Focus your efforts and stay strategic.

## Nurture

Finally, _keep your network warm_.

In other words, you need a system in place to communicate with these fifteen to thirty people and ensure those relationships stay vibrant, positive, and mutually beneficial.
You have to be intentional about relationships for them to thrive. Otherwise you are a random person coming out of the woodwork when you need something. That is not valuable at all. In fact, it can have a negative impact on a relationship. Annie knows a person who has only ever reached out to her when they needed a reference for a job. Do you think that’s a positive interaction? No! She has slowly gone from thinking positively about the relationship to thinking that the person only sees her as a means to an end, and that’s it. And that makes her sad. She liked that person. Don’t be like that!

## Conclusion

When you follow these rules, job hunting is less about resumes, interviews, and position openings, and much more about people, people, and people. Those who follow these rules reach their full potential and help a lot of people along the way.

If you would like help with implementing these concepts, Annie and I are developing an app that will help you build your network, get intentional about balancing it the right way, and keep your relationships with the people in your network valuable and consistent. If you’re interested in learning more, we’re starting an early access program for a limited group of people. [Contact us](/contact) and let us know! And [follow us on LinkedIn](https://www.linkedin.com/company/hedge-ops-software-llc) to hear more about it.

---

# When Preparation Meets Opportunity

URL: https://hedge-ops.com/posts/when-preparation-meets-opportunity/

Unlock your career’s full potential! Learn the art of setting and tracking goals for continuous growth.

A big thank you to everyone who followed along in the [Career Inflection Points series](/posts/inflection-points-introduction) and for all of the kind comments and DMs. Reflecting on such milestones reminds me of the famous quote:

> Luck is what happens when preparation meets opportunity.
I felt very lucky when I got my job at [HashiCorp](https://www.hashicorp.com/), but if we keep using the inflection point imagery, it was really just a continuation of the curve created in one of the previous inflection points. I also interviewed there four different times, twice with the same team, and just finally landed on the right timing with the right group of people. So I decided to end the series there and pivot to some other ideas that have been floating around in my head.

I have been thinking a lot lately about the preparation aspect of luck and how that’s been a sort of secret weapon in my own career growth. Here’s the TL;DR: it doesn’t matter how good of an engineer you are unless you know how to track your goals in order to show growth quarter after quarter, because guess what: no one else really knows what you’re doing otherwise.

When I first started at [10th Magnitude](https://www.10thmagnitude.com/), they were having some growing pains, so I had four different managers in the first six months that I worked there. I was freaking out a little bit because I was in a new career and industry, and I had a lot of [learning](/posts/inflection-points-learning) and growth to figure out. Not only that, but my imposter syndrome was such a daily battle that I was certain I needed to keep a running log of reasons why they shouldn’t fire me, ahem, ways in which I added value.

I knew that I needed to show growth quarter after quarter, but what was my measuring stick? If I compared myself to my colleagues, what they knew, how much they delivered to customers, I would surely come up short; I was the epitome of a noob. The only appropriate measuring stick in this scenario was the set of growth goals that I set for myself. But what should they be? I told you about how [Michael](/about/michael) was a [patient tutor](/posts/inflection-points-dinner) for me in those early years.
Well, he also served as a career guide and mentor to help me map out the goals necessary to get me to where I wanted to be. I had no clue about any of it, but he knew me well enough to know where I should take my career and how I should plan it out. (Disclaimer: I recommend that anyone proceed with caution in attempting to simulate this with their own partner. It’s not easy. If you can find another mentor willing to put in the time and effort, then that is probably better.)

So here’s how it would go. I made my 5–10 year long-term goals, annual goals that would support the long-term goals, and quarterly goals that would support the annual goals. I would share my annual and quarterly goals with my manager and with whatever review tracking system we had at the time. Then, in each one-on-one meeting we had, I would share what I was doing to move the needle on those goals. That’s it. When review time rolled around, I would have evidence of growth to show.

Do you think that anyone else was tracking my progress and growth? Absolutely not. That was my job, and no matter how good of a manager I have, it’s always my job. Your growth matters most to you, not to anyone else, so treat it as one of your most important tasks.

Sure, it’s a lot of work to stay on top of it all the time (I use the [Full Focus Planner](https://fullfocusstore.com/collections/annual-subscriptions)), but it’s just like keeping a [budget](/posts/you-need-a-budget). At first you think that having a budget is restrictive and such a pain, but eventually you see that it actually allows you more freedom than you had before, because you have greater control over where your money is being spent, which allows you to spend it on the things that matter most to you. Goal setting is just budgeting for your time and life energy.
While the Full Focus Planner has been great for planning, we’ve been missing something more integrative that would help us truly track and make progress in our jobs and with our network. We are in the early stages of creating that solution, and if you want to make progress in those areas, let us know. We’d love to talk to you!

---

# Recognizing People Who Do the Right Thing

URL: https://hedge-ops.com/posts/recognizing-people-who-do-the-right-thing/

We often underestimate the level of sacrifice a leader must make to do the right thing. We want to recognize people who are making those sacrifices.

Annie’s [inflection points series](/posts/inflection-points-hall-of-fame) brings up so many memories and emotions for me. We worked really hard together to accomplish her goal of going from Casting Director to Cloud Automation Engineer. We’d stay up until midnight most nights after the kids went to bed to do a crash course on everything IT. As she networked and worked to show her value, the people we thought would see that value didn’t return her calls when open positions were available, and the people we didn’t expect to be interested did. She faced an uphill battle on many fronts, and I wasn’t prepared for how complicated, difficult, and frankly unreproducible it was.

But I had reproduced those same types of career transformations before at work. From the time right after the 2008 crash when I put my new project on hold so I could teach Craig unit testing as he transferred from QA to Software Engineering, to the time I hired TJ, who had learned software engineering from a rural outsourcing company in the midwest, to the time I hired Megan from DevOps Days DFW as an intern before she totally solidified one of our core products, to the time I worked with Daniel to shift his skills from on-prem to Azure. This is something I do, and this is something I'm good at: I help people grow past what they see as possible.
Then I take a step back and see all the forces that made those results almost impossible for me to achieve and virtually impossible for almost everyone in our industry. People shouldn't have to fight that hard to do the right thing.

Helping Craig enabled him to change the trajectory of his career into software engineering, and it ultimately laid the foundation for the product we wrote together, which combined my engineering expertise with his domain expertise. It came, however, at the cost of delaying the ultimate results of the project, which caused me to miss having a central role in the next wave of products at my company.

Helping TJ helped him become a solid and impactful software engineer and contributor even today (we are still friends). And it helped me get out of the _mad scientist_ mode I had been in, forcing me to explain my ideas to someone new and to have empathy with his challenges. But it came at the cost of extending that project while my peers were executing quick wins that gained them notoriety, and it made me an outsider to some people.

Helping Megan find a path into tech enabled a stuck team she was embedded on to go from a lost strategy in an on-premises data center to an automation-driven cloud strategy in Azure. She brought people together and helped them breathe life into their own careers. I didn't know what I was going to do about that product before Megan transformed it. But it came at the cost of my peers thinking I was crazy for hiring someone with a marketing degree to write our Chef code.

Helping Daniel find a path from being overworked late at night to being an Azure architect helped us converge the product expertise and cloud strategy that most places can't pull off. And it transformed his life, as his employer went from taking advantage of him to partnering with him. But I had to endure eighteen months of my boss at the time thinking I was crazy for doing this and blind to the reality that old dogs can't learn new tricks.
It turns out they can, if you give them time, which nobody does.

I didn't have a lot of support and recognition on my path to helping the people above and others. I was left with my deep desire, instilled in me by my deeply religious parents, to _do the right thing_. I get immense personal satisfaction from this. But I am also frustrated, because relying on people’s ability to ignore rewards and advancement in the face of doing the right thing is not a plan for success. The forces aligned against a manager doing the right thing are simply too great.

We want to do something to change that. We want to find people with stories like mine: making sacrifices to do the right thing, helping people grow, hiring balanced teams and helping them transform their careers. We want to recognize these people. We want to make a page that says to their boss, employer, and network what I wish I could have sent to my boss a decade ago: “This person is showing strategic and transformational leadership. You should promote them!” That type of message would have made a huge difference to me, and it’s not too late for us to make that difference in someone else’s life and career.

Could you share with us people you know who are doing the right thing for their people, helping them grow and transform their careers? Email the name of a person we can recognize in this way to nominations@hedge-ops.com. We want to tell their story and change the reward calculation that leaders in our industry are forced to make. If you can’t think of anyone, can you email us anyway with your thoughts on how you think we should tackle this problem? Perhaps the problem is deeper or different than we think; we want to know that, too.

---

# Career Inflection Points: 10M Hall of Fame

URL: https://hedge-ops.com/posts/inflection-points-hall-of-fame/

In this blog post, Annie gives a shout-out to some amazing colleagues who went the extra mile to help her navigate the challenges of entering the tech world.
This post underscores the value of believing in newcomers and investing in their growth.

I love this quote that my former colleague, Matthew Sanabria, sent me from a talk that Thomas Boltze did at GopherCon this year. The culture at our companies is, indeed, the set of behaviors that get rewarded, tolerated, or sanctioned, so in the spirit of rewarding behavior that I want to see more of, I’m going to brag about some awesome people and then invite you to do the same by sharing with us the hall-of-famers you know. I’ll tell you how in the call to action section.

I talked about the inflection point that was brought about by digging into learning in order to create better outcomes [here](/posts/inflection-points-learning), but before moving on from this chapter of my career, I wanted to spend one more post talking about how the people and culture at [10th Magnitude](https://www.10thmagnitude.com/) (10M), my first tech job, created some really great inflection points that simply aren’t very probable at many other companies that I’ve seen. I want to call out some of my former colleagues who went above and beyond, lived out their values, and were there to help me through some tough learning when the tutorials just weren’t cutting it.

![10th Magnitude Pics](/article_images/inflection-points-hall-of-fame-reel2.png)

## The Setup

I think consultancies are prime places to learn. It was total kismet that an Azure consultancy from Chicago [hired me](/posts/leaning-in). First of all, Azure is arguably the easiest cloud to learn of the big three. Secondly, I really love Chicago, its honest and fun people, and its glorious food scene. Thirdly, a consultancy solves the same types of problems over and over at different companies, but the problems are different enough that you’re always learning. And I certainly was.

I loved my time at [10M](https://www.10thmagnitude.com/). I loved that it was a startup and out-of-the-box thinking was encouraged.
I loved that if something needed to be done that I could knock out, I didn’t have to go through an approval process that could take weeks; I just did it, and someone would actually thank me for it. I loved that I knew my CEO and that he was an all-around awesome person. I loved that there was a friendly atmosphere of camaraderie. Yes, there was a hurry-up-then-wait pace to things that could feel frantic at times, and yes, we would complain that the clients were asking for the wrong things, but all in all I’m so lucky to have gotten to work there when I did, and I look back with immense fondness.

![10th Magnitude Pics](/article_images/inflection-points-hall-of-fame-reel1.png)

## The Problem

I think I’ve driven home the problem of this time of my career pretty well by now in this series. I was trying to break into an industry with the unique skill set that I had to bring to the table while I was building an entirely new one. Challenges ensued. Getting hired somewhere was a challenge. Insisting that I wanted to do engineering while everyone kept suggesting sales, marketing, and technical writing was a challenge. Learning about infrastructure from the inside out when I only knew the outside in was a challenge. Writing Terraform module examples of Azure architectures that didn’t exist before was a challenge. With each of those challenges, however, there were really awesome people who used those challenges to create inflection points in my career journey. So I’m dedicating this post to my 10M Hall of Fame, the people who really stand out in my memory for being awesome.

## The Inflection Points: Getting Believed In by My Personal 10th Magnitude Hall of Fame

![Molly Hughes](/article_images/inflection-points-hall-of-fame-molly.png)

### Molly

We all know the power that a technical recruiter wields. 10M was lucky enough to have an extremely talented, out-of-the-box-thinking recruiter named [Molly Hughes](https://www.linkedin.com/in/molly-hughes-2643b92a/).
She was a very involved recruiter, seeking to get to know every candidate beyond their resume. She knew what she was looking for, and she knew that it wasn’t always something that would be present on a resume. When Trevor recommended that Molly follow up with me, she could have taken one look at my resume and passed me up, but instead she had a long phone conversation with me, read my blog, looked me up on all the social media accounts to see what I was up to, and then decided to see me in person. This was exactly what I was hoping for, because I knew that recruiters wouldn’t see what I had to offer by screening my resume alone. The number of recruiters who actually took the time to get to know me, however, was pretty low.

But she didn’t stop there. Once she was convinced that I could do the job, she advocated for me. She believed in the value that I could add as an engineer, not in sales or marketing, where many had tried to pigeon-hole me. She believed in the vision I cast: that I could leverage those soft skills while building up the technical skills. I would have had quite a different experience if Molly hadn’t been such an integral part of 10M at the time. She is a huge reason that it was such a successful startup.

![John Shupper](/article_images/inflection-points-hall-of-fame-john.png)

### John

[John Shupper](https://www.linkedin.com/in/johnshupper/) was the sales director and leader of the Dallas office, where I lived at the time, so I interviewed with him, too. He was also so encouraging and believed in what I had to offer. He got excited about the uniqueness of my resume and advocated for my hiring. He then continued to be a source of encouragement and motivation throughout the four years that I worked there. He believed that the DevOps methodologies I was advocating for would lead to positive business outcomes, so he encouraged me to create online content and even co-hosted a couple of videos with me.
He sold projects for me to work on and was always on the lookout for future projects that were in my wheelhouse. He was a fun and encouraging team lead for the Dallas office who never made me feel like an imposter or that I didn’t belong. He always treated me like an engineer and a professional. That went a long way for me, because, believe it or not, there were a lot of people out there who didn’t quite believe that I could pull off a career in technology. When their voices got loud in my head, I could go back to the Dallas office to reset with some encouragement and camaraderie.

![Scott Nowicki](/article_images/inflection-points-hall-of-fame-scott.png)

### Scott

[Scott Nowicki](https://www.linkedin.com/in/scott-nowicki/) was another person I interviewed with who saw my potential, and, as an engineer, if he suggested that 10M hire me, he was really signing up to be tasked with helping me grow. He knew that he’d have to put his money where his mouth was, so saying yes to me was a huge show of support.

We did not have many projects together until the one project that stands out as a huge turning point for me. We were tasked with creating [Terraform](https://www.terraform.io/) example modules for the [Azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest) that would live in a [HashiCorp GitHub repository](https://github.com/hashicorp). This was in the early days, before all of those modules were easily accessible through the [registry](https://registry.terraform.io/browse/modules?provider=azure). These examples were pretty new, so lots of folks would be relying on them to get started. That also meant that I couldn’t just google to figure out how to do it. I had to rely on being able to translate them from [ARM templates](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/overview).
Some of the resources that we had to create in Terraform didn’t even have corresponding resources in the Azurerm provider yet, so the project was pretty challenging. I remember working a lot to get those done. Scott was my project lead, but I remember that he was double-booked for part of it, so he was stretched thin. Regardless, he spent countless hours pair programming with me to teach me what I needed to get unblocked. Some of the modules were pretty simple, and some would take over a week to create. He was supportive and had the true heart of a teacher throughout the project. I felt his support, and I came out of that project with so much more confidence and skill than I had before. It was the best crash course in Terraform module creation that anyone could ever have, and because of his mentorship, I was able to do many more Terraform projects after this one, and even lead them myself.

## Conclusion / Call to Action

And here’s where the problem lies: _helping Annie get into tech_ is not something that will likely come up in a performance evaluation or interview with Molly, John, or Scott. But that’s _exactly_ the type of behavior that made 10M the awesome company that it was, and it’s exactly what we need more of in our industry. So here I am recognizing them, and I don't want it to stop with just them.

Can you let me know of other people in your professional circles who have had that kind of impact on early career people? Think of someone, and email us at nominations@hedge-ops.com. We want to start learning from and writing about these people, and in doing so, _rewarding the behavior that we want to see_. Their information would be confidential, and we would only share their story with their support.

---

# Career Inflection Points: Chicago and Learning

URL: https://hedge-ops.com/posts/inflection-points-learning/

When I tell the story of how I got my first job in tech, it’s always a fun celebratory vibe, but in reality I was so scared and overwhelmed.
I had to make new learning goals over and over again. Such is the life of a technologist.

When I tell the [story of how I got my first job in tech](/posts/inflection-points-networking), it’s always a fun celebratory vibe, but in reality I was so scared and overwhelmed. I had an immense amount of grit, energy, and desire to accomplish my goal, but then it was time to make new goals, over and over again. Such is the life of a technologist.

## The Setup

As I alluded to in [the story](/posts/inflection-points-networking), my first job in tech was for a small Azure consultancy. I was hired when the company was only about 30–35 people. They hired me first on a four-week contract. It was a smart move. If I didn’t work out, they could simply move on and not bring me on full-time. This lowered the pressure for me, as well. (Thankfully, I had the privilege of already being on [Michael’s](/about/michael) health insurance policy, so that wasn’t a point of stress for me.)

During the interview and onboarding process, Trevor, the person who had recruited me, got his big break to work at the software company he had been trying to get hired at for a while, none other than Chef, so he was going to be out of the picture for my training. This left me with a huge amount of anxiety. What if no one else at the company had the vision for what I was capable of, and I was left without a mentor or trainer?

I flew to the home office in downtown Chicago for my first two days, and I don’t know if I looked like one or not, but I sure felt like a deer in headlights. I was dressed to the nines in a nice blazer, dark jeans, slinky white blouse, heels, and jewelry, ya know, like a typical engineer. ;) (This is the pic of me trying on the jacket I bought for the occasion.)

![A typical engineer](/article_images/inflection-points-learning-engineering-outfit.jpg)

I did the tour and met everyone.
I was feeling pretty good but a bit nervous and uncertain of what the next two days would look like. The office space was one of those open-concept seating situations with enclosed offices lining the halls for the phone-call people, so I took a workstation and got comfortable. A little while later, someone booted me from my seat because that’s where he always sat. Everything that seemed to go wrong, no matter how small, would increase my heart rate tenfold. I pressed on and tried to keep feeling the fear and doing it anyway, faking it until I made it.

## The Problem

In the average technology company, you get about a month to onboard: making sure your email is set up, installing all of the software you need onto your new laptop, meeting the team, etc. Only a couple of hours into my onboarding, however, they told me to start looking at my very first project: configuring an [Elasticsearch](https://www.elastic.co/) cluster with [Chef](https://www.chef.io/). I didn’t know it at the time, because I didn’t know what was normal, but looking back on it, they threw me into the deep end. I had no idea how to configure Elasticsearch manually, let alone with Chef, and I had never even heard of Elasticsearch until that day. I don’t think they meant to throw me into the deep end, but it was a consultancy; they were at the mercy of whichever deal was coming through at the time, and those things are very hard to time with onboarding.

The other engineer I would be working with on this project was kind, a nice person to chat with, and we got along well as work pals, but for this project I remember him being pretty hands-off. I don’t think he knew how much one-on-one onboarding help I actually needed. In addition, I was too scared to ask for help for fear that I would expose myself as an imposter. In reality, they knew whom they were hiring. Being a startup, there was no official onboarding guide. (I eventually wrote it up myself to help those who came after me.)
I was given an extra laptop that was sitting around in the CTO’s office and told by the engineer to go familiarize myself with the project. I sat down to look at the code, and I froze. Everything that I had learned in the last few months was out the window. I ended up texting Michael because I didn’t know what to do. I wouldn’t call him because I didn’t want anyone to know that I needed help. I was so uncertain of what I was expected to know and what I wasn’t. I was already being hired as a risky candidate; I didn’t want to do anything to jeopardize myself. Michael would tell me what was appropriate for me to ask for clarification about and what I needed to look up on my own. I was still sweating bullets, but the guidance helped to ease some of my anxiety. ## The Inflection Point–Learning My Way Off the Bench I realized a very sobering truth in those days–that I was in a unique position to have a husband who was such a good and willing teacher. I realized that most engineers weren’t like him. I became extremely grateful for him. And if you know Michael, you will know that he is a rule-follower through and through. He never gave me answers, but he was so adept at telling me what I needed to learn in order to get the job done. He would direct me to the appropriate tutorials and sit with me through some of those, answering questions and guiding me. He was patient when I was not. Sometimes I didn’t want to hear that I needed to learn an entirely new segment of technology before I could move forward, but he was always right. I had two or three [Chef](/tags/chef) and [InSpec](/inspec) projects right at the beginning (yes, they ended up hiring me full time) with a bonus of a Terraform project (another one I learned on the fly). But after those, the configuration management and infrastructure as code jobs dried up. I ended up on the bench a lot, which gave me a great chance to do a ton of learning, but that wasn’t sustainable, obviously.
They started putting me on [Azure Site Recovery](https://azure.microsoft.com/en-us/products/site-recovery) projects, which had nothing to do with the things I had been learning, so I realized that I had to be aggressive in learning more. At one point when things just weren’t clicking for me with the online courses, Michael suggested that I build my own computer so that I could see all of the components and make it make sense faster. He was right; that helped a ton. (My son still uses that computer for gaming.) I did, however, need to focus my learning and become a subject-matter expert in something so that I could be better utilized. I decided to go all in on Chef and InSpec. I got every [certification](/posts/chef-certification-tests) that Chef offered, and I did training to be able to teach Chef. It definitely helped me be better utilized! I ended up getting to teach a one-week Chef training that didn’t go so well because the client said I was too _green_, but after that I got more contracts on which to sharpen my skills. When I was good and comfortable with Chef, InSpec, and Terraform, I got a long-term contract embedded on a team using those skills to bring integration testing via [test kitchen](https://kitchen.ci/) to an organization. ## Conclusion / Call to Action I saw learning as an opportunity and an invaluable tool in my toolbelt. I have always believed that I can learn anything that I need to accomplish any job. It may require patience with myself or delivering more slowly, but it’s better to take the time to learn to do it the right way than to half-ass my way through a project. As time has passed and deadlines loom, I definitely need to remind myself of this truth from time to time. While my company may have struggled to have the time and leeway to train me themselves in those early days (I don’t fault them–it was a startup), they did allow me to hang on and learn during those times that I was on the bench (and expense the courses!).
Also, when I was interviewing with them, I told them that while I was ramping up, I could use my non-technical skills to deliver value to them, and I did! I blogged. I spoke at conferences. I was on podcasts. And I helped to align their brand with one of learning, inclusivity, empowerment, empathy, and drive. And their investment paid off; I spent years two and three at that long-term client building out their Chef and Azure infrastructure. I spoke at a Chef conference with one of the engineers there, and whenever I came onsite we all had a happy hour because we all genuinely enjoyed each other’s company. ![Chef Conf Talk](/article_images/inflection-points-learning-chef-conf-talk.jpg) I talked to numerous companies when I was trying to break in, and most of them probably had a better onboarding process than the one I describe above. However, none of them would hire me. I needed a cowboy company to take a chance on me and have enough patience with my growth to see the ultimate return on investment. Most companies frankly lack that patience, foresight, and creativity. So in today’s call to action, I would love for you to think about how successful I would have been at _your_ company. Would you have had a superior onboarding experience but an inferior hiring process that would have excluded me? Do you have people on your team who would have believed in me enough to spend their cycles on me? Are you rewarding those people? Or are you rewarding the ones who put their heads down and deliver _right now_? ![10th Magnitude Branded Cookies](/article_images/inflection-points-learning-10m-cookies.jpg) _Be Magnitastic._ --- # Career Inflection Points: Networking URL: https://hedge-ops.com/posts/inflection-points-networking/ From a fateful dinner decision to networking at conferences, with mentors and passion leading the way. It's a testament to the power of community and mentorship in the tech world.
In my last post in the Career Inflection Points series I walked you through a nice evening of inspiration called _[A Big Dinner](/posts/inflection-points-dinner)_ where I made the huge decision to try my hand at technology. I left that story remembering how full of hope I was. I also remember how terrified and overwhelmed I was. Today, though, I’m going to talk about the inflection points that made the hard work a little more worth it as I started to see it all come together. ## The Setup After that dinner, we got to work. When I say _we_, I mean me and [Michael](/about/michael). He is the one who designed my learning program, a very patient and wise teacher without whom I would have had no idea where to start. We started with some basic [Git](https://git-scm.com/), and it was hard and confusing and I hated it oh so much. I remember one day crying out of frustration, and Michael said, “Don’t worry, everyone cries when they first learn Git.” Just straight learning for the sake of learning was difficult and not very motivating, though, so we had to build in the two magic ingredients: a problem to solve and a sense of urgency. After getting to a basic knowledge of Git and InSpec, I had begun creating the blog [series](/inspec) on InSpec that I told you about, and it definitely had those two magic ingredients. I had created a following on Twitter composed mostly of Chef community folks, and to create a manufactured sense of urgency, I would tweet about my upcoming posts. This held me accountable and served as marketing for myself (and free marketing for InSpec). The blogging and the tweeting were going well, but the learning was still a grind. I was stretching brain muscles that had never been used before. It was very smart that Michael suggested that I combine the learning with something that I enjoyed: blogging. This took the edge off of the pain of tech-learning and let me use my writing muscles, muscles that were already in good shape.
## The Problem Still, though, it wasn’t enough to break into the industry. How was I going to know where the opportunities were? How were people going to know me and see how passionate I was about the change I wanted to make in my career and life? I was going to need to do some networking and see people in person to show them what I was made of. But how? I didn’t have a job and wasn’t yet part of that world. ## The Inflection Point–Conferences I started looking for opportunities and making them if I had to. I started talking to recruiters and to whoever would talk to me. I also started looking for conferences. Michael had heard that [DevOpsDays Dallas](https://devopsdays.org/events/2023-dallas/welcome/) was in its infancy and planned on doing its first conference that year, so I emailed them. What did I have to lose? I was so eager and desperate, and they graciously gave me an amazing opportunity to be the sponsor liaison, soliciting all of the vendors for sponsorships. I ended up meeting dozens of people through that role! It was perfect. During the time that I was organizing for DevOpsDays, [ChefConf 2016](https://www.chef.io/blog/chefconf-2016-build-deliver-delight) was just around the corner, and it was in Austin, just a three-hour drive from where I used to live. Michael was already planning on going, as his company was paying for it. I, however, was unemployed, and let’s face it–those conferences aren’t priced for folks to pay out of pocket but rather for employers to pay for them. So I had heard about a scholarship I could apply for, wrote a letter for my application, and got in! I was so excited to get to meet all the folks I had been interacting with on Twitter. (Here's a pic of Michael and me yucking it up at the photo booth at that conference.) ![Michael and I at the photo booth at ChefConf 2016](/article_images/inflection-points-networking-chefconf-2016-photobooth.jpg) And then the coolest thing happened.
[Matt Stratton](https://speaking.mattstratton.com/) was working at Chef at the time, and he knew Michael from being his customer success architect (or something), and he followed me on Twitter. Well, he hosted (and still does) a podcast called [Arrested DevOps](https://www.arresteddevops.com/), and he asked me if I wanted to be on the [live podcast](https://www.arresteddevops.com/chefconf-2016/) that they broadcast from ChefConf to talk about my experience. I still can’t bring myself to listen to it because I was so green and just excited to be there, but it was such a great opportunity and a perfect example of someone [lending their privilege](https://anjuansimmons.com/talks/lending-privilege/) to someone. They saw my passion, excitement, and dedication and gave me the benefit of the doubt. ![Arrested DevOps Live at ChefConf 2016](/article_images/inflection-points-networking-Arrested-DevOps-ChefConf-2016.png) While I was there, I met [Trevor Hess](https://twitter.com/trevorghess), an Arrested DevOps co-host who [recruited me](/posts/leaning-in) to the cloud consultancy that he worked for at the time. And when I say _recruited_, I really mean _fought for_ me. I was a huge risk to a consultancy. Bench time is money down the drain. I didn’t know enough to be fully billable as a consultant yet, but he saw the potential. It wasn’t just him, either. The folks that interviewed me liked me and had to convince the CEO that I was worth taking a chance on. But it worked! I fought tooth and nail for it, but I was able to show up to DevOpsDays Dallas as a Cloud Infrastructure Engineer representing my new employer. ![DevOpsDays Dallas 2016](/article_images/inflection-points-networking-devopsdays2016.jpg) ## Conclusion / Call to Action Was getting my first job in tech in this way the easy way? Absolutely not. Was there any other way that would have yielded the same results in the same amount of time? Absolutely not. Would I recommend others do it the same way that I did? 
Absolutely not. There are definitely some fundamental pieces that I do recommend, but overall, I think I got lucky. I have seen a lot of people work just as hard as I did who didn’t get the big break that I did, and it’s not their fault. I think it’s largely the fault of crappy culture in the industry. People aren’t willing to incur any risk, even if the upside is far greater than the potential downside. It’s wild to think that I got my first job in tech in this way, but the math adds up after you consider the equation. If Michael didn’t have an established network that I could piggyback on and if he didn’t spend all of that time teaching me, none of this would have been possible. So what do people who don’t have a Michael do? They struggle! So why don’t more people decide to take a risk on folks like 2016-me and invest in the upside? There are a lot of reasons, really–rather, _excuses_. They don’t think they can support the new person’s learning. They are afraid that the new person will be a burden. They don’t have support from leadership to take the risk. They aren’t organized enough to know how to manage and teach the new person. In my humble opinion, I think that all of those excuses are unfounded. If you have been in the industry more than 3 years, then you should not only be growing your own technical skills, but you should also be growing your leadership skills. Even if you are learning how to lead your first solo project, part of that is learning how to impart your knowledge to others. This is a skill that is sorely lacking in technology. Growing someone so new to technology is the perfect engineering problem to solve! How better to learn how to create efficiency and performance in a system than to develop those skills in others? I urge you to start this in whatever work you’re currently doing. Is there someone junior on your team that you can practice on to build your confidence in this area?
If you are wholehearted in this endeavor and treat it with the same importance as the last difficult technical skill that you learned, then I promise you will both yield impressive results. --- # Career Inflection Points: A Big Dinner URL: https://hedge-ops.com/posts/inflection-points-dinner/ Explore the pivotal moment in my career journey that occurred over a dinner conversation. Learn how a shift in perspective led to a new career path and the importance of empathy in the technology industry. The next important inflection point in my career life happened at dinner one night. If you are following along, I walked you through how I got to that dinner by way of the kindness of [family](/posts/inflection-points-introduction), an actor who [believed in me](/posts/inflection-points-casting), and the decision to be a [stay-at-home parent](/posts/inflection-points-motherhood). Fast-forward ten years…no wait, let’s hit some highlights of those ten years first. ## The Setup I ended up having three boys in five years, and I was your typical stay-at-home mom overachiever. I went to the mom groups and the library singalongs for toddlers, had a lifestyle blog when those had first started to gain traction, ground my own wheat for my homemade bread, made sourdough before the pandemic made it cool, made yogurt and baby food, cloth-diapered for a stint, was the chair of the welcoming committee for my neighborhood association, started running half-marathons and went to the gym 4–5 days a week, helped build the garden at the kids’ school, had a high school foreign exchange student; you get the point. I was _busy_ and needed outlets for my energy. At the time, I was committed to being a stay-at-home parent for the long haul, but I also had this growing desire to cut my chops in the work world and see what I was capable of. While I was learning this about myself, I had started a few side projects. I [blogged](https://www.ynab.com/blog/) for YNAB when they first started doing it.
I started a home decorating business. And I made home decor pieces out of reclaimed wood. As you can imagine, none of that was very lucrative, but it gave me the itch to go back to work. ## The Problem I didn’t think that I could go back to casting because my former casting company had relocated, and I would have to start from scratch, making peanuts. I wasn’t too keen on that idea. I had been home for ten years, so I wanted to jump into a new career that would give me a starting income that would better reflect the value I thought I could bring. That was going to be a challenge, but I was up for it. The spring semester before my youngest kid was to start kindergarten in the fall, I started experimenting. How could I use the experience that I had to transfer into another career? I started interviewing people in various careers to ask them about their jobs and get an idea of whether it was possible for me to make the leap into those careers. I also started studying for the GMAT, thinking that I would need to get an MBA to make a meaningful career transition. I didn’t know what I wanted to do, but after studying for the GMAT, I knew that I didn’t want to be in school for another two years. Meanwhile, during these past ten years, Michael had been growing his career, starting out as a software engineer, then architect, then engineering manager, then director. He began a huge [DevOps initiative](/posts/intrinsic-motivators-leading-to-chef) at his company and was really successful in moving his products to the cloud, providing faster, safer, more reliable delivery of their products. It was quite a gutsy venture for him to take this on as he was at an old-school behemoth of a company that wasn’t exactly known for the popular DevOps maxims like _move fast and break shit_. Every day, he would go off to work and come back home and vent. He’d vent about not getting traction with his initiatives and how there were people just resistant to change.
There was one particular sticking point that went on for about a year, and it centered around security and compliance. We’d talk it out together, and I would urge him to see how he could approach these blockers from a social and emotional perspective. What did his blocking colleagues really want? They were, after all, responsible for the security of a major point-of-sale software through which millions of credit card transactions were run. They weren’t just trying to be difficult. They had a lot of responsibility on their shoulders. How could he come to a compromise? How could he show them a better path forward? They had to have valid reasons for blocking; how could his solutions help them? So the DevOps transformation that Michael was pushing at his company was centered largely around [Chef](/tags/chef) tooling for configuration management to better enable cloud migrations. His company was also required to be [PCI-compliant](https://www.pcisecuritystandards.org/), and what fortuitous timing, Chef had just acquired the up-and-coming auditing framework, [InSpec](https://docs.chef.io/inspec/). I wrote about the whole fantastic transformation [here](https://sysadvent.blogspot.com/2016/12/day-3-building-empathy-devopsec-story.html), but the TL;DR is that InSpec was created with empathy in mind first. The creators knew that many security and compliance folks at the time weren’t developers and that these folks were getting nervous about everything moving to code. InSpec provided a way for them to write their audits as code, but the framework they created to codify everything was simple, elegant, and actually pretty fun to write. So while all the IT folks in organizations were moving to an Infrastructure as Code (IaC) and Configuration Management mentality, the Security and Compliance folks wouldn’t be left behind. They could have Compliance as Code using InSpec. This changed everything for Michael’s initiative.
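If you haven’t seen what an audit-as-code looks like, a short sketch makes the “simple, elegant” claim concrete. Here is a hypothetical InSpec control; the control ID, impact, and expected value are invented for illustration, not taken from any real profile:

```ruby
# Hypothetical InSpec control illustrating compliance as code.
# The control ID, impact, and expected value are invented examples.
control 'sshd-protocol-2' do
  impact 1.0
  title 'SSH daemon must only speak protocol 2'
  desc 'Readable like a checklist item, but executable as an audit.'

  describe sshd_config do
    its('Protocol') { should cmp 2 }
  end
end
```

A compliance engineer runs a profile of controls like this with `inspec exec` against a target node and gets a pass/fail report, no general-purpose programming required.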
He was excited to learn how to codify a bunch of the compliance audits himself and show it to the Security folks, teach them how to do it, and change their minds. But he still faced resistance. He realized that while he was out building relationships with people in the Chef ecosystem, the Security folks weren’t a part of that world at all. They were missing out on a crucial part of the solution: people and community. Michael began to realize that his initiatives were benefiting from him being part of a thriving DevOps community. He had access to solutions, people to help and bounce ideas off of, and excitement! He realized that he needed to bring his Security friends into this experience so that they could make informed decisions from a similar perspective. So we decided to do what we do and invite everyone to our house for dinner for some good face-to-face community building. We invited the two main Security and Compliance guys at his company, Michael’s VP of Engineering, the Chef salesperson who was so instrumental in getting Michael unblocked on the Chef side of things, and the two creators of InSpec who were so humble and excited to see their product solving real world problems. Michael and I were excited, too. We saw how people were enabled to come together to solve a problem with kindness and empathy and how the design of a product enabled that, and we thought that was something really special. We had a lovely dinner where we discussed that specialness and thanked [Christoph](https://www.linkedin.com/in/chrihartmann/) and [Dominic](https://www.linkedin.com/in/dominikrichter/) for building empathy and kindness into their product. I loved how it was about so much more than just helping a major company make more money. It was about humanizing the people that make the world go ‘round and helping them to be just a little bit more joyful while they go about their work life, building a community of kindness, knowing that other people are looking out for them.
![Dinner with InSpec Founders](/article_images/inflection-points-dinner-the-germans.png) Sure, the dinner was a strategic move on Michael’s part, but he just really liked this way of working—in community, with empathy, and seeing his colleagues as people and not hurdles. We invited them into our personal space to eat our food and drink our wine with the risk that our elementary-aged boys could get in a screaming match at any moment. And he invited the Security people to bring all their objections to the table, literally! They had the opportunity to bring their fears, their disagreements, their skepticism, all of it, to the creators of the tooling themselves to talk it out in a setting of community and warmth. By the end of the dinner, everyone felt heard and ended up on the same page. It really was a celebratory moment because it not only accelerated their DevOps transformation, but it changed the culture in their organization to one of an automation-first mentality and even more important—empathy. ## The Inflection Point — Just Give It Two Weeks After everyone left, Michael and I were clearing the table off, and he said, “Why don’t you learn InSpec as a way of getting into technology?” Oh dear reader, I cannot express how dumb of an idea I thought this was at the time. “Uh, because I don’t know the first thing about computers, maybe,” was my response. “No, I’m serious. They say that it’s such an easy framework to learn. You could learn it, as someone who doesn’t know coding, and you could blog about your experience and report if it’s really as easy as they say.” He had my wheels turning at this point, but I still thought that I was the worst person for this job. I had never even opened a terminal before, but I didn’t even know enough about technology to use that as an argument in that moment. It was nuts. “Give it two weeks,” he said. “If you hate it, then you never have to do it again.
But if you start getting the hang of it, then you can use that as a jumping off point to learn more.” I wasn’t convinced yet, but the thing I loved was witnessing the power of connecting the social problem with the technical problem. It gave me a view of tech outside of what I had been conditioned to think—that it was just about sitting behind a desk all day coding. It was more than that! It was about problem-solving, sure, but if you embrace the humanness required, you can create win-win situations all the way around. So I reluctantly started learning it. The process was painful, but Michael was the most patient tutor. We would wait until the kids were in bed, and we’d stay up until 12–1 AM every night. Ugh, just thinking about it stresses me out all over again. I had to start from _scratch_. The genius behind Michael’s scheme, however, is that since I had to start from zero knowledge, the [resulting blog posts](/inspec) that I wrote to teach people how to use InSpec assumed that the reader was just as uninformed as I was. This resulted in extremely simple and useful tutorials for a wide range of skill levels. Michael’s idea was pretty genius. ## Conclusion / Call to Action Part of the reason that my transition into tech is so difficult to replicate is that there aren’t many Michaels in the world. I honestly don’t know many people that are willing to sit with someone, even someone they love, for hours each day for _months_ to teach them something that may or may not work. He had a dogged determination to give me opportunities of which my education and upbringing had robbed me. He knew that I was smart enough, determined enough, and stubborn enough to make it work, but it was still a huge amount of work for him. So this is not a call to action for all the folks that want to be in tech to just work hard to learn a technology as a jumping off point. This is a call to action for all of you with the means to help someone.
Is there someone in your life struggling to make ends meet who has the drive, intellect, and potential to make it in tech and make a salary that could change their lives and their family trees? I will say that being a double-tech-income family has changed our lives significantly. Our children have opportunities that we didn’t dream of before. I don’t want any of us to be knowledge hoarders or gatekeepers. The more folks from non-traditional backgrounds that we can bring into tech, the [better](https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815) it is for everyone. I hope you will look for opportunities to be a Michael for someone. --- # Career Inflection Points: Casting to Stay-at-Home Parenting URL: https://hedge-ops.com/posts/inflection-points-motherhood/ Explore the journey from a casting career to stay-at-home parenting. Discover the challenges, trade-offs, and emotional roller-coaster of this major career inflection point. I’m continuing down this road of writing about all of the major inflection points in my career, in hopes that it will shed some light on the path necessary to help others with their own career transitions or to help you help others with theirs. We talked about my college experiences, the boost I was given by caring family members and then the seasoned actor who spoke up for me. But today I want to talk about how some inflection points aren’t realized for a really long time and how to identify those tradeoffs. ## The Setup I entered the film industry at a really rough time in the town I was in. Work was drying up, and as a contractor, I would go weeks at a time with no work. It was rough. The frustrations associated with it being a long-tail career (where the fewest amount of people make the most money and everyone else works for peanuts) were mounting.
During the five years I was [casting](https://www.imdb.com/name/nm1805484/?ref_=ttfc_fc_cr302), I went back to school part-time to get a theater minor and teaching degree in order to teach high school theater. I also did marketing for a food distributor for about six months. I ruled out both of those careers for my life because of the lens I was seeing them through at the time. I decided to continue on with casting until I wanted to start a family, at which time I would stay at home with kids. After several projects that were just commercials, or films with just extras casting, or independent films that didn’t pay much (or at all), I finally got one that had a big-name director and principal cast. The script came by FedEx, not emailed, so as not to easily leak. I read it, and I knew that this script was different and special. It was chilling. My casting partner and I would be doing the local casting, meaning all of the smaller parts that were not cast with big names. The film was [There Will Be Blood](https://www.imdb.com/title/tt0469494/?ref_=ttfc_fc_tt), directed by [Paul Thomas Anderson](https://www.imdb.com/name/nm0000759/?ref_=tt_ov_dr) and starring [Daniel Day-Lewis](https://www.imdb.com/name/nm0000358/?ref_=tt_ov_st). I would proceed to do a multi-state search for the young boy who co-starred in the film alongside Day-Lewis, and I would get to work directly with Paul Thomas Anderson, a directorial genius. It was a dream come true; I had my big break. I was also eight months pregnant with my first child. Being 5′2″, my pregnant belly was comically large (later to find out my baby was comically large at 10 lbs 13 oz).
I was driving Paul Thomas Anderson and the lead casting director [Cassandra Kulukundis](https://www.imdb.com/name/nm0474697/?ref_=tt_rvi_nm_t_2) to the airport one day (pre-Uber), and Anderson joked about how he was going to create a character for a future film after me while I sped him through Dallas traffic to get to the airport in my 2000 Honda Civic, looking like I could give birth at any moment. I loved every minute of it, but I knew it was goodbye. ## The Problem Certain things started spoiling the experience for me, making the decision to be a stay-at-home mom much easier. For instance, in my multi-state search for said little boy co-star, I drove eight hours out to Marfa, Texas to do a local open casting call since the film would be shot there. I got to the production office and found that the principal casting director had already done an open call there without informing me. Even though I was tasked with finding the child star, she took the opportunity from me, most likely unknowingly, as film production companies run at full tilt. She found him there in Marfa, and he was brilliant. I drove the eight hours back home in my Honda Civic with my husband who very sweetly accompanied me on the trip, and I felt defeated. I was letting it sink in that my career choice was quite hard-driving and go-getting, and it seemed at odds with raising a family. Eighteen months later we sat in the movie theater watching the credits of There Will Be Blood, and everyone in my casting group, even people that did not work on the film, was credited except for me. My decision to leave casting behind was cemented. I didn’t want motherhood to be complicated by the drama of trying to make it in film, so I simplified by staying home for the long haul. ## The Inflection Point—Casting to Stay-at-Home Parenting Leaving casting and starting a family was a huge inflection point, but it’s not one that can be put on an XY coordinate defining an upward or downward trajectory thereafter.
It was a winding road through motherhood as a stay-at-home parent, full of emotions and new adventures. Having kids young was a purposeful decision we made, noting the tradeoffs, of which there are many. The major tradeoff we made was whether we wanted to focus on our careers while we were young or in our forties. We chose the latter, but there was no definitively correct answer. It was just a bunch of really hard choices to make that were laid before us, but we recognize that it is an extremely lucky position to be in when you have the luxury of many options. We weren’t certain that we were making the right choices as we couldn’t see the future. We could only hope to be so lucky as to be afforded the gift of moving forward, hoping for new and beautiful inflection points ahead. ## Conclusion / Call to Action If you or someone you love is in this stage of life wondering what to do, my advice is to breathe. You may not know the perfect thing to do, but just breathe and make decisions out of a place of peace and confidence. Fear can sometimes be a good barometer, but when it’s paired with insecurity it can be noisier than is useful. This is the last post that I will write about the pre-engineering inflection points. My goal in these last three posts was to show you that, as you know, life is complicated. People show up to decisions to switch careers with so much baggage and emotional debt that they’re usually willing to do whatever they can to make something work. That’s certainly where I was at, and I hope that you will be able to recognize the gravity of the situation when you see it in others. --- # Career Inflection Points: From Film School to Casting URL: https://hedge-ops.com/posts/inflection-points-casting/ Explore the journey from film school to a career in casting by seizing opportunities and making connections. If you’re new here, welcome! I’m doing a series called Inflection Points, and you can catch the intro [here](/posts/inflection-points-introduction).
In the intro I told you that I had a couple of pre-engineering inflection points to share before getting into the engineering story, so this is one of those. When you get to a certain age, you look back and realize the gravity of how seemingly random things happen that totally change the course of your life. You can also see with more clarity what factors within your control came together to produce the outcome you saw.

## The Setup

This particular inflection point happened when I was a junior in college as a film and video major. I went to a state school that didn’t have the best film program, but what it did have was scrappiness. All the professors in the department were local industry professionals as well. They often told us about job opportunities in the area and encouraged us to get as much professional experience as possible. We had several film and video projects going on all the time, and all of us students would work on each other’s projects, sometimes for group credit, sometimes as a barter for each other’s services, and sometimes just to pay it forward. We also worked in the real world together often, so we all knew each other’s strengths and future career goals. We knew who wanted to be a cinematographer, who wanted to mostly produce, who was excellent at lighting, who the best writers were, and who had the best computer to handle the big ol’ video files for editing.

## The Problem

My computer was certainly not one of those. I had a hand-me-down desktop computer from my grandfather with a whopping 256MB of RAM that ran Windows Me and AOL. I don’t remember the specs on the video lab computers at school, but they were Macs and much better, running Final Cut Pro and Adobe Premiere. We also had our 2GB external drives to store our video files on.
Even so, it was common for the school Macs to crash while you were trying to render video, losing all of the editing you’d spent two all-nighters on. Not having grown up with computers, I was out of my league when those issues arose. I was great at editing and absolutely loved the creative process, but the technical issues were completely Greek to me. Had I had my own (good) computer, perhaps I could have learned enough to overcome the technical divide, but I don’t think it was in the cards for me.

## The First Inflection Point—My Choice

So when deciding what to specialize in, I noticed there was a huge gap in casting. (Enter: inflection point.) Everyone was getting their friends to be in their films, and none of them were actual actors. I thought, “Surely there are actors in the theater department who would love to get some actual film footage for their reels and would be willing to work for free.” So I posted flyers in the theater department but didn’t get much traction. It was time to go broader. I started contacting talent agencies in the area and asked if they had any up-and-coming talent in need of footage that we could provide with our student films. They loved the idea. I started holding full-scale auditions with talent from the area and casting our student films with amazing talent. It was great practice for everyone involved. Once word got around, I was soon seeing even seasoned local actors at my auditions. They would audition for our student films simply because they wanted to stay in practice and meet young talent who would soon be in the biz. I also started using local filmmakers’ studio spaces to hold auditions for more of a professional vibe. It was all coming together, and I was having a blast doing it.

## The Second Inflection Point—Someone’s Voice

I wasn’t an actor, so I had very little experience with how auditions actually worked. I had one or two P.A.
(production assistant) paid gigs where I watched how casting sessions worked, but that was it. I mostly just made it up as I went along. At one of my auditions, a really lovely, seasoned local actor complimented my auditions, said they were so professional, and told me I should meet a local casting director he worked with a lot. I couldn’t believe it! It was happening! It also felt so good to be recognized for the hard work and validated that I was doing it right. He set up the meeting, and I started interning for her immediately! Amazing! I couldn’t believe it all just fell into place. After school was over, I began working with her full-time, and soon after she made me an associate casting director. The inflection point that gave me the boost I needed was the kind actor’s belief in me. Yes, I was a hustler and knew how to get shit done. That counts. But until someone vouched for me, I was doing it all anonymously, without recognition. You all know what it’s like when you suggest someone to work for your company. If it doesn’t work out, then you end up taking a hit politically. Well, he stuck his neck out for me because he believed in me, and he was willing to risk taking that hit with someone who could choose to cast him in projects or not. I had my ups and downs in the casting biz. I did a lot of B-movies, horror films, commercials, reality shows, and a few films I’m proud of. But all in all, it was a tough time for the whole film industry in my city. A lot of folks ended up moving away to find work elsewhere. Still, I look back with fondness at my short time in that career. And the kind actor who helped me break into the biz was there the whole time! I saw him at many more auditions and film sets in my time as a casting director, and we always shared a kind smile and friendly conversation.

## Conclusion / Call to Action

We often think that we need something bigger than we do.
We think we need to catch the eye of a big leader, like a Director or higher, or maybe hold the gaze of a mass of people. But in my story, I was just trucking along when someone who knew how to use his voice and his power saw me. I wasn’t trying to make it happen; I was being authentic, because that’s all I can control—me! Are you out there slaying it but need to be vouched for? If so, keep doing great work! Make as many connections as you can. I cannot promise that someone will come along to vouch for you, but you will be doing all you can to make the opportunities more likely. What about you with the voice and the power? Do you see someone hustling and slaying who could use _your_ voice and power? Do you believe in them? Is the risk to your political capital low if you vouch for them? Then do it! It might be all they need to get over the hump and cause a life-changing inflection point. And, wow, maybe they’ll have an awesome story to tell about you one day.

---
# Hedge-Ops: Going Strong for 20 Years
URL: https://hedge-ops.com/posts/hedge-ops-going-strong-for-20-years/
Join Annie and Michael as they celebrate 20 years of Hedge-Ops, sharing their journey into tech and offering insights to help others in their careers.

I moved! If you’ve noticed, I’ve moved my blog from my [old site](http://www.anniehedgie.com/) to join forces with my partner in all areas of life for the last 20 years, [Michael](/about/michael), over here at [Hedge-Ops](/about)! (If you look closely, the DNS for my old site is still wonky, but don’t judge, as this is certainly an area I have not mastered.) The move was actually my idea. If you’ve been with me for the last seven years, you’ll know that the [story of my journey into tech](/posts/introduction) is a bit unconventional, to say the least. I took a total leap of faith and [gave it a go](/posts/leaning-in), and for the most part, it [panned out](/about/annie) pretty well.
Over the years we’ve tried to [help others reproduce](/posts/barriers) those results in their own careers, and y’all, it’s hard! I honestly believe that it requires systemic change in the technology industry at large, but where there is no change, we want to help people get past those hurdles. My secret weapon all along has been my [husband](/author/mhedgpeth), or really our partnership. He was the mastermind behind my efforts, since he knew the industry so much more deeply than I did, from both an [engineering](/posts/ten-ways-to-have-a-successful-early-software-career) and a [leadership](/posts/3-steps-for-managers-to-discover-the-right-roadmap) perspective. He taught me everything I knew in the [beginning](/posts/summer-of-discovery). He had the plan, and I had the drive, grit, and muscle to get it done. Now that I’ve got some years behind me, I find it unbelievable that we did that. I’m exhausted just looking back at the old blog posts and remembering how hard it all was (the work, not the blogging). Many times over the past seven years, people who know my story have asked me and Michael for help (or directed others to us), and we’re both fired up to help, but we’re also really honest with people about how the odds are not in their favor. Not everyone has the privilege that I had at my disposal—a working partner to pay the bills, a partner with an established network, racial and socioeconomic privilege, and on and on. We’re very interested in exploring how to fix the system in some small, holistic ways in order to give people the leg up that they need and deserve. Fixing capitalism is out of the scope of this endeavor, but our holistic approach does involve exploring the entire system, from jobseekers and career changers to middle management and leadership. So stick around if you want to see what we’re thinking and what our upcoming offerings will be. I’m excited about the next seven!
---
# Managing Performance With Your Manager: It Goes Both Ways
URL: https://hedge-ops.com/posts/managing-performance-with-your-manager-it-goes-both-ways/
Learn how to manage your manager’s performance, set clear expectations, and drive your career growth. Get actionable tips for effective 1:1s and feedback.

Many organizations have a yearly performance review process where people are rated on their performance and their compensation may change as a result. For managers, this means weeks of meetings, discussions, setting selections in tooling, and waiting, culminating in a final conversation with each member of their team about their performance and the rewards that come alongside it. I wrote in [an earlier post](/posts/how-to-manage-performance-from-low-performers-to-superstars) about the system a manager must have to manage the performance of their team. It’s _just as important_ that you manage _your manager’s_ performance and how she manages yours. In other words, performance management goes both ways: to your team and to your manager. Here’s some advice on how I do that.

## Defining and Communicating Performance Challenges

If you want to be a growing manager and leader of people, you must consistently put yourself in your boss’s shoes, or even _their_ boss’s shoes. From this vantage point, you’ll think about _why_ people are doing what they are doing, and you’ll start to understand what you might do differently. If you are thinking deeply about it and growing, you’ll start to recognize gaps that your manager does not see that are affecting both her performance and yours. It’s imperative that you share this! But since every boss is different, I highly recommend the book [Crucial Conversations](https://www.amazon.com/Crucial-Conversations-Tools-Talking-Stakes/dp/1260474186/) to help you map out how you’ll do this.
At the end of the day, in order for you to grow, you must be able to have a difficult conversation of some sort about how your boss’s behaviors affect your performance, and then work together to create a working agreement on how to improve. This can’t be a spur-of-the-moment conversation around performance-review time; it must happen within an atmosphere of trust and therefore be focused on the facts, with clear communication on both sides. If you don’t feel safe doing this, especially after reading Crucial Conversations, you might consider whether you will be able to grow in your role. If you think you may need to move jobs anyway, why not try out an honest conversation and see what happens? Not everything has to be a confrontation; you could say something like, “When you did X, I worry that it harmed Y, and I would like to learn more so I can better understand where you’re coming from.” As Covey says, [seek first to understand and then to be understood](https://www.franklincovey.com/habit-5/).

## Establishing Expectations and Goals

With an honest, bidirectional relationship established, it’s now time to have an honest conversation about the expectations for your role and your goals. Don’t wait for HR to prod your manager to do this. Here’s a secret: you can have this conversation whenever you want!

## Getting to a Specific Conversation

Don’t allow this conversation to be vague. So many times I hear people ask their manager, “How am I doing?” and get the response, “You’re doing well; keep doing what you’re doing.” This is not the real conversation you need to be having. While this is quite lazy of the manager, it is also _your responsibility_ to get to a specific conversation about specific feedback. To get there, follow this process:

1. Find the expectations of the role, either via the job description in the posting you applied to (or a similar one), or, better yet, in your company’s career ladder or leveling document if one is available.
2.
Write out your own assessment of your performance based on these documents. Concretely state the expectation and your performance against it, for example: “This tells me that I need to mentor junior engineers. I have been talking to Julie about some of the projects she has been working on.”
3. Give your boss a week’s advance notice that you want to have this conversation during your 1:1. Send them the document beforehand.
4. When you have your 1:1, start the meeting with this topic. Don’t talk about anything else.

## Getting Solid Feedback from Good Questions

This is the critical phase. Don’t ask, “Did I do my job?” or “Does this constitute good performance?” You won’t get valuable feedback with those questions. Instead, ask better questions, like:

- What else could I do to show the organization that I’m performing at this level and the next?
- Can you give me an example of a high performer who does a good job of this? How is their approach different from mine?
- (About a weakness) How could I have done that differently? What future opportunity can we think of that would show you I’m growing in that area?

## Promotion Discussion

You may be in a situation where you consistently get “good” marks in your current job. If so, it’s time to find the expectations for the _next_ role you should be in and start measuring yourself against those. Even here you should have an understanding and plan of specific ways in which you can grow and improve. Never settle for “keep doing what you’re doing”; instead, continue to understand your true performance. This is especially true if you’re a high performer ready for a promotion. When you’ve mastered your own level, it’s never too early to start working on the next one!

## Regularly Communicating Results

At this point, it’s important to regularly communicate results on your goals to your manager. Again, don’t wait for HR to mandate this!
HR establishes the _minimal_ requirements necessary to keep employees engaged and to limit legal liability. They aren’t there to prescribe _the best_ outcome for you. That’s _your_ job! I recommend a monthly sync on specific aspects of your performance, and once a quarter, a deep dive. A great way to ensure this happens is to call it out beforehand. At the end of a 1:1, I’ll let my boss know that next time I’d like to talk about my performance, and I’ll send her any documentation I want to review beforehand.

## Conclusion

The key here is to [be proactive](https://www.franklincovey.com/habit-1/) and manage your own career. No one else will manage it for you. Your manager has a million things to think about and may resort to “keep doing what you’re doing.” Don’t fall into that trap! When you manage your manager’s performance in a constructive and helpful way, regularly establish expectations and goals for your role, and ensure that you have regular conversations about results, you will find that your growth begins to skyrocket. You will grow at a pace that others don’t because you’ll uncover the challenges and tackle them head-on.

---
# Ten Ways to Have a Successful Early Software Career
URL: https://hedge-ops.com/posts/ten-ways-to-have-a-successful-early-software-career/
Kickstart your software career with our top 10 tips. Learn how to focus on technology, manage your manager, and avoid pitfalls to set yourself up for success.

Great careers start with the actions people take in their first four years. Some people unknowingly get sucked into the wrong course and end up in a place they don’t want. Others avoid the distractions and make the right decisions. Here are ten common characteristics of those who do it right:

1. _Focus on the technology and code._ The core expertise you’re building is that of a technologist; everything else branches off of that. So use this time to focus on and deeply understand your technology and code.
Don’t get distracted by meetings, promotions, workplace drama, or company strategy. You should spend the majority of your day either pairing with someone or heads-down on an important, deep problem.
2. _Plan to learn._ [Sit down for an hour or two](/posts/the-hidden-key-of-great-managers-calendar-control) without distractions and ask yourself what you need to learn to be like the people you admire on your team. Create a plan and timeline for accomplishing that. Learn how you learn. Do you learn best by taking a class? Watching a video? Reading a book? If you don’t know, find out with some experiments, and incorporate learning into your everyday activities. Don’t wait for your manager to allow this. Do you think a professional basketball player asks permission to practice free throws? You’re a professional now; practice is included in the job!
3. _Ask, ask, ask early and often._ There will be a voice inside you telling you that you’re clueless about this stuff and that people are going to think you’re dumb for asking questions. Don’t fall victim to that! A key source of my success has been learning how to ask the right questions. Now is a great time to start! Ask questions, take notes, review your notes, and continue to ask more and more. Curiosity is the central trait that drives people to senior positions in our industry. And unfortunately, fear is the enemy of curiosity. If you take my advice here and it’s still not working, you should strongly consider switching jobs.
4. _Avoid distractions (for now)._ Don’t focus on the company’s performance or on making sense of what leaders say. Don’t get sucked in by your colleague who hates your manager and wants to vent. Also, don’t waste time on Slack trying to include yourself in every conversation. As I said above: spend most of your day either with your headphones on coding or in a pairing session, in person or on Zoom. [You should focus](https://www.amazon.com/Deep-Work-Cal-Newport-audiobook/dp/B0189PVAWY/).
That will build the context needed to get to the next level.
5. _Manage your manager._ I have some sad news: most managers do a horrible job managing early-career employees. Their advice might be, “Keep doing what you’re doing.” Don’t settle for this. [Manage your manager’s expectations](/posts/finding-alignment-3-levels) and work to get aligned with them on your contributions to the team. Ask the right questions.
6. _Know the career path._ Your goal during this phase is to get to a Senior Software Engineer (or equivalent) position within the first 4-6 years. If you’re at a large company, the path from where you are to that position should be clear, with clear leveling guidelines. If you’re at a smaller company, it’s much more fluid. Whatever the case, figure out what the expectations are, so you can be aware of how your actions contribute to your promotion journey.
7. _Don’t prematurely optimize._ The temptation is to hurry to the next stage. You want a great career, so you want to get promoted as early as possible. Perhaps you want to figure out why no one is listening to the customers, or whether to adopt that fancy new technology everyone is talking about. Don’t fall into that temptation! Instead, focus on getting things done and building context. Don’t allow yourself to be pulled away from that.
8. _Become an expert at something._ As you progress and do the things above, figure out what learning and experience will make you the go-to person for your team. Sometimes this involves someone leaving the company or having the grace to give you some room. It involves being intentional about deeply learning a section of your software, so you can speak authoritatively about it. Get a certification. Solve every bug for a particular component. Whatever the case, seek to be a deep expert on _something_. From there, you can build on more and more things.
9.
_Collaborate, don’t compete._ At this stage of your career, there is plenty of room for _everyone_ to progress. Someone might be better than you at something, and leadership might promote them before you. That’s great! Celebrate it! Support them! Creating a habit of collaboration will help you down the road when _collaboration becomes the job_. Your colleagues are your teammates, and you win together. Don’t compete with them or seek to take credit. If you follow the above advice, you’ll _naturally_ succeed. Guaranteed.
10. _Have fun._ Remember why you chose this profession. This is hard work, but turning an idea into a solution becomes addicting. Remember to celebrate those moments when you learn something new, fix a bug, or implement a new feature. This work is filled with negativity (compile errors, broken tests, bug reports, production incidents)—don’t let that swallow you up. Find the fun in it, and have fun!

Those who follow these guidelines are destined to have a solid career. To summarize: your early career is about building the fundamentals of creating software. With those fundamentals in place, you can decide whether to go into management, become a principal engineer, or even move into sales or consulting. Let that all come to you as you build a solid foundation for your career. I’d love to [hear from you](/contact) if you’re in your early career and this resonates. I’m interested in creating a community of people who support one another in optimizing their career progression.

---
# Career Inflection Points: An Introduction
URL: https://hedge-ops.com/posts/inflection-points-introduction/
Explore career inflection points and how they shape our professional journeys. Learn from personal experiences and gain insights on your own career.

An inflection point, as you know, is the point on a curve at which the direction of its curvature changes.
In your own life and career, I’m sure you can think of many inflection points that informed the direction you would take on a certain path. Life would almost certainly be different without them. Our goal here is to try to replicate some of the great things that happened in our early careers with as many people as possible. So we’re thinking deeply about what worked for us … and what didn’t. The inflection points in my career gave me both moments of great, overwhelming challenge that led me to a pit of despair and those times when I was given a leg up, a push start, some timely support that gave me hope to keep pushing through. Both were needed for growth. Had I not struggled, I wouldn’t have gotten stronger, but had I not been given a boost when I was on the struggle bus, I may not have gotten out of the dip in time or may have lost hope. We think these stories might help people learn to navigate their own journeys. So I’m going to start a little blog series where I talk about each of the inflection points in my career that I found particularly meaningful. They all involve some really helpful and high-minded people who believed in me out of the goodness of their hearts. I hope you’ll follow along and be inspired by the generosity and selflessness of fellow engineers and others.

![Inflection Points Description](/article_images/2023-07-14-inflection-points-introduction/inflection-points-description.png)

## The very beginning

I wrote a lot about my timeline in the [about](/about/michael) section, how I started in film and video casting, decided to stay home with kids, and then got into technology, but I’ll use these next couple of posts to point out the inflection points in those first 15ish years post-college. The stories are too good to pass up! The very first inflection point was deciding to go out of town for college. Mind you, I didn’t make the _best_ choice of college, nor was I even prepared for college.
My high school was under-performing, so even though I was in honors classes, I was behind and not prepared for college-level math and science classes. I chose pre-dentistry as my major, so I was swiftly kicked in the butt my very first semester by chemistry and algebra. I made mercy Cs in both of those classes (mercy because I probably deserved to fail). Turns out you can’t expect to make good grades just for showing up like I did in high school, _and_ you can’t sleep only from 3 to 7, AM and PM, every day and maximize your learning potential. Oh, freshman year. The choice to go out of town for school, though, was good for me. I needed to be away and focus on what I needed and wanted for my life. I was the first in my family to graduate college, so I had no real guidance. My sister, who was a grade ahead of me, was at the local community college, so no one really had university advice for me. I was flying by the seat of my pants. I changed my major to undecided the very next semester, then decided to switch to a cheaper state school the next year (after taking a semester off to save up money for a car)—my second inflection point. I was poor and had no business going into such great debt for a religious university that didn’t even have the prestige of other expensive liberal arts schools. Going to the state school meant that all of my tuition would be paid for by the Pell Grant, such a blessing. Along with that choice came the first real moment of grace that I experienced from someone’s kindness. My cousin Cameron, who is 10 years older than me, was living nearby at the time for a job, along with his wife, Laura, and their two small kids. They were my only family within 300 miles of me. I often hitched rides back home with them for holidays during my freshman year.
When I told them that I was transferring to the nearby state school, ten minutes from their house, they offered to let me live with them rent-free for as long as I needed in exchange for babysitting. That was huge! At first, I was reluctant. After all, Cameron was part of the successful side of the family, and I was part of the outcast family within the larger family. Would I fit in? Would they like me? I decided to get over my insecurity and accept the very generous gift they were handing me. They saw my potential, plucked me out, and gave me a leg up. It was too perfect an opportunity to pass up. I had no plan for how I was going to live otherwise. I knew tuition was covered, but living expenses were truly an afterthought. Did I mention that I was flying by the seat of my pants? I didn’t know what I was doing! I only ended up living with them for six months because I was eager to be on my own, but it was exactly what I needed to get my bearings. I look back on those six months as truly transformational. I loved living with them. I got to see what a healthy, functional family looked like, something I had never experienced from the inside. I got to see what successful careers in your early thirties with small kids looked like. Laura was, and is, a strong and hard-working woman who confidently tells it like it is. I loved living with her – cooking, having long conversations, and sharing lots of laughs. I learned so much about confident womanhood in the short time I lived with them. And from my cousin Cameron I saw a kind and fair but firm father and loving husband. To this day, we’re closer than cousins; I consider them like siblings. It’s hard to overstate what their contribution meant to my life. The inflection point that they pinned down created such a drastic hockey-stick curve because I was at such a low point already. I had no privilege; I didn’t know what I was doing; I was lost.
But by welcoming me in and sharing their lives with me, they were able to inform my goals and outlook on life in those short six months in such a powerful and meaningful way. Aren’t they inspiring? Keep your eyes open for the little Annies in your life! Give them a leg up if you can. Maybe it’s not letting them live with you, but I promise there are ways you can help, even if it’s just helping them register for their first semester of college when it all seems overwhelming, or buying them groceries when they’re down and out with no other support. It will make a difference; I can testify.

---
# Finding Alignment—3 Levels To Grow To The Next Level
URL: https://hedge-ops.com/posts/finding-alignment-3-levels/
Explore the three levels of alignment in management: immediate execution, understanding but disagreeing, and understanding and agreeing.

One of the hardest parts of being a manager is learning that management goes both ways. Yes, you are managing a team and perhaps managers of teams, but you are also managing _your_ manager. It’s common for managers to struggle with this, especially when it comes to [aligning](/posts/finding-alignment) with their managers. Your manager reaches out to you on Slack because an important customer wants something done _now_. How do you align with that? There are no correct answers for every situation, but I like to think of alignment as happening at three levels, from least to most desirable:

1. Just Do It—This is where, without question, you get your manager’s request done as quickly as possible.
2. Understand, Disagree, Do It Anyway—This is where you seek to understand and empathize with your manager about _why_ the request exists, but disagree and execute anyway.
3. Understand, Agree, Do It—This is where you seek to understand and empathize with your manager and realize that if you were her, you would do the same thing.
Let’s break down each of these in more detail:

## Level 1: Just Do It

Sometimes your manager comes out of nowhere with an emergency, and you haven’t built up an understanding of your manager’s issues, so you’ll need to jump into action and do things immediately. You won’t always have the luxury of understanding why or thinking about it deeply.

### Team Communication

In this situation, you should tell your team the facts. For example, you’ll say, “Customer X needs this thing right now, and leadership has asked us to drop everything for it.” If your team pushes back that this is dumb, tell them that it very well might be, and that you’ll make sure there’s a retrospective afterward to figure out how to improve, but for now, we must take action. You _shouldn’t_ tell everyone that you thought it through and say “I want you to do this.” That may sound like what managers need to do, but what you’re really doing in this situation is eroding trust with your team. You’re also making yourself look like an ineffective leader to them.

### Don’t Overdo This

If you consistently find yourself at this level, you’re setting yourself up for failure. To your team, you come across as a _yes_ person who can’t think for themselves. They won’t respect you. To your management, you come across as a [non-strategic cog in the machine](https://www.amazon.com/Linchpin-Are-Indispensable-Seth-Godin/dp/1591843162/), not ready for growth. This is counterintuitive: by giving your management _exactly what they want without question_, you are setting yourself up for career stagnation and a team that does not respect you.

### Sometimes Just Do It

However, if you _only_ try to be strategic and get to the next level, you’ll get yourself in trouble. Sometimes things are _truly_ emergencies, and we don’t have time to investigate why things are the way they are. Other times you lack the context and relationship with your manager to know what the right questions are.
And you just do it and execute. There is a time and a place for this! ## Level 2: Understand, Disagree, Do It Anyway Ideally, when requests come your way, you can have some time [to understand why](/posts/3-steps-for-managers-to-discover-the-right-roadmap) you need to take the requested action. This is preferred because it will give you credibility with your team and will enable you to drive the _right_ outcome for your organization. ### Team Communication The communication with your team in this situation is much clearer. You can say “Customer X needs this thing right now because without it, they will miss an important quarterly target, and because we have a renewal coming up with them next quarter. As a result, we need to drop everything and take action on this.” See how much better that is? You can actually field questions on this now! ### Disagreement is Healthy You might find yourself uncomfortable disagreeing with the decision. Perhaps you thought in the situation above that you should tell the customer and sales to calm down because in six weeks you’ll deliver that release that will solve so many of these problems, and you’ll still get the sale. This is a great place to be! _When you disagree with leadership, but understand why they made the decision they made, it means that you are thinking at the next level!_ If you want to be a next-level leader, start here! ### Sometimes Values Differ Sometimes the real reason why your leadership decides what actions to take is based on values you disagree with. Perhaps they are taking this action because they are afraid, and you don’t share their fear. Or they don’t value people as much as you do. Sometimes they value people _more_ than you do. Either way, this is a great place to be because you’ve done the hard work of understanding _why_ people are doing what they are doing, and that builds empathy and helps you communicate reality to your team.
## Level 3: Understand, Agree, Do It This is the best place to be—you’re given a request, you have taken months to understand your leadership’s context on why they think the way they think, and you agree with their decision. ### Team Communication This is where you mention your agreement explicitly. “Customer X has this problem and _I_ think we need to take action on this.” You can confidently give the message and field all questions because, since you agree with this, it’s _your_ idea! The risk here is that you’re _too_ confident and don’t leave space for others. Remember, everyone you manage is seeking alignment on this scale just as you did. Leave room for those conversations, and seek to help your team find their own understanding of _why_. ### Don’t Overdo This If you find yourself _always_ agreeing with leadership, you are in a dangerous place. You might not be thinking of the problem deeply enough. You might not be safe to disagree with management and therefore are subconsciously adopting whatever they say. Or you might have outgrown your position. No matter what, [disagreement is healthy](https://www.amazon.com/Crucial-Conversations-Third-Talking-Stakes/dp/B09MV3818X/). Seek out situations where you might disagree with management, and challenge yourself to think deeply until you find what you disagree with. The best teams I have been on have been teams where the leadership has made it safe to disagree. ## Conclusion This should give you a spectrum of how to gain alignment with your management and what to say to your team when you reach different levels. Sometimes you just have to do things and don’t have time to understand. Other times you understand and agree. Sometimes it’s in the middle. Whatever the case, do the right thing for the situation, and you’ll find that you grow as a leader over time, with a team that respects you.
--- # The Hidden Key of Great Managers—Calendar Control URL: https://hedge-ops.com/posts/the-hidden-key-of-great-managers-calendar-control/ Learn how to control your calendar to maximize your time, delegate effectively, and create an ideal schedule for increased productivity and growth. Almost every manager I work with faces the same problem: finding enough time in the day to get anything done. They went from the world of the individual contributor, where there were maybe a few meetings in a day, to a world where everyone seemingly wants their time in the form of a meeting all day long. They end their day exhausted, having accomplished nothing. Many managers see themselves as facilitators, so attending meetings where they have nothing to contribute feels like part of the job. If they aren’t there to gain context and visibility, then what is their true value? Don’t fall into that trap! I’ve found that the more _intentional_ and _strategic_ you are with your own work and your own team, the more respect, scope, and responsibility you get as a result. This includes meetings! Make a choice. Instead of being the fly on the wall at every meeting, be the manager who _delegates_ to others and has a _[process](/posts/getting-things-done-action-plan)_ by which the right information gets to the right places so that the teams you manage have real, measurable impacts. That’s how to get ahead! Here’s more on how to do that: ## Insist on Meeting Agendas The first step to being a great manager who has control over her calendar is to maximize the value of your time by insisting that there is an agenda for every meeting you go to. At first this might seem difficult to broach. For example, a colleague creates a meeting with your whole team called _Chat about Releases_; nothing is in the meeting description. Well _of course_, they think this is a valuable use of time; why else would they have created the meeting?
It’s sad to say, but a lot of meetings are created because people don’t want to follow your process. Or they don’t know your process. Or they are stuck in the rut of letting a meeting solve every problem. Whatever the case, it’s up to you to set the tone for you and your team: follow up with the people who call the meetings and get clarity about what problem they are trying to solve and what _each_ person in that meeting needs to _contribute_ to a solution. With these in place, you have an agenda! So the _Chat about Releases_ meeting becomes _BigCorp Defects in Release 9.3—Retrospective_ with an agenda of: 1. Review BigCorp defects (link to document) 2. Actions the team is taking to make the next release better 3. Agreement on messaging to BigCorp and leadership That’s an agenda! Insist on it every time! With that in place, reinforce the agenda with clear action items coming out of the meeting and a system in place to ensure that people execute those actions. Soon enough you’ll find almost all of your meetings become action oriented and your time is spent much more efficiently. ## Get Comfortable with Declining Meetings The next step on this journey to controlling your calendar is to _normalize rejecting or delegating meetings._ Here’s a secret: you don’t have to do everything. In fact, the term _manager_ implies that you’re _managing_ other people’s work, so it’s in the job description for you to delegate this to another team member. Alternatively, you can respond with, “That’s not within my scope,” and reject the meeting invite. I do this politely and openly, but I also do this very often. I reject _standing meetings_ the most often. These meet weekly, have a set, normal agenda, and are usually recorded. Usually, I have nothing to say in those meetings, or I ask a delegate who is more in tune with the standing meeting’s issues to attend.
If there might be important information, I watch the recording later or even read the meeting notes if they’re shared. The key is to avoid any meeting where there is at least a 95% chance that you will take a passive role. ## Create an Ideal Schedule Now that we have the parameters in place for the types of meetings you should go to, and you are able to decline the meetings that don’t fit, the next step is to _create an ideal schedule to take strategic control over your calendar_. I do this through the [Full Focus Planner](https://fullfocus.co/planner/), which I have used for years. Every quarter, the planner encourages you to [fill out an ideal schedule for a week](https://www.youtube.com/watch?v=ziuBkOnpsws). This allows you to focus on how you _really_ want to spend your time to meet your goals. It’s also helpful to begin categorizing your days or your day parts. For example, on Wednesdays, many of us go into the office. So I schedule Wednesdays to be in the office with free time to collaborate with the people there. I also limit personal appointments to Monday afternoons, if possible, and don’t schedule 1:1s at that time. On Tuesdays, I focus on one of my teams, on Wednesdays another, and on Thursdays the third. Your results may vary, but planning like this helps you batch your thinking and make better use of your time. Once you have an ideal schedule, you can then _plan a week in advance._ This is when you apply all the rules above, _not at the last minute._ I do this on Sunday nights with the Full Focus Planner, and alongside this create goals and targets for the week. ## Conclusion Once you follow these steps, you’ll feel like you have a new lease on life and a completely different job. While before you were pushed back and forth by whatever demand was placed on you, now you are _[truly proactive](https://www.franklincovey.com/habit-1/)_ and are _[putting first things first](https://www.franklincovey.com/habit-3/)_.
What’s left is to lead by example and to share this with your team members. Once everyone is intentional about their calendars, you unlock a whole new level of productivity for everyone. And this level of productivity will open up so many opportunities for your growth as a leader. After reading this post, you might say that the hardest part of managing your schedule is managing _your boss’s expectations of you_ to be at certain meetings. That will be the subject of the next post. --- # How to Manage Performance from Low Performers to Superstars URL: https://hedge-ops.com/posts/how-to-manage-performance-from-low-performers-to-superstars/ Learn the three-step process I use to manage the performance of my team. This creates the right outcome for everyone, from low performers to superstars. The last year and a half as a manager has been quite the whirlwind. In 2021 and early 2022, the industry experienced unprecedented levels of attrition. We didn’t know how we were going to keep anyone long-term. And then during the second half of 2022 and up until the time of this writing (mid-2023), the industry has been laying off tech workers at a rate we haven’t seen in quite some time. On top of this, some tech companies are [lowering their stock compensation](https://www.seattletimes.com/business/amazon-plans-to-reduce-stock-awards-for-employees-as-of-2025/) and [limiting raises](https://www.cnbc.com/2023/05/10/microsoft-skips-salary-increases-for-full-time-employees-this-year.html). If this season has taught me anything, it’s the value and importance of solid performance management. Performance management is so easy for managers to overlook. In the good times, we want to keep things vague because we want to keep people at our company. In the bad times, we surprise people with tough conversations because we haven’t properly managed their performance.
Over the past few years, here is how I have managed the performance of everyone, from the low performers to the superstars: ## Create a Performance Log First, _keep a log of the good, bad, and ugly of the employee’s performance._ Managers so often fall into the trap of _[recency bias](https://en.wikipedia.org/wiki/Recency_bias)_, where, when asked for an employee’s performance rating, they simply provide a top-of-mind assessment. That is not only unacceptable, but it’s completely unfair to the employee. I had a friend who didn’t receive the yearly performance rating that they expected, and when they approached their senior leader about it, they were given feedback about something that had happened a few weeks prior. That’s a clear miss on performance management by this friend’s leadership. Let’s not do that. Instead, create a doc that is labeled `$PERSON—performance log—$YEAR`. Then add entries like: _6/2/2023 Heard from product that Jane was engaging with them and defining the project that will be critical to success. Shows next-level effort._ or _6/3/2023 In a meeting about the new feature release, Patrick was argumentative and had to be right. Will follow up with him in our 1:1._ It doesn’t have to be complicated. Keep it simple, include screenshots, and you will thank yourself later when review time rolls around. ## Agree on Growth Opportunities Second, _challenge everyone with growth opportunities._ It’s easy for a 1:1 to become a therapy session where the manager listens and commiserates with the complaints of the employee. Sometimes that’s warranted, but sometimes _every_ employee needs to be challenged on how they can improve. It seems scary at first, but your people will appreciate it. I had one employee who told me for years that their management kept telling them, “Keep doing what you’re doing.” This person was struggling and didn’t have help to understand how they could grow.
When they became my direct report, I challenged them in the ways they should grow. They were eager for [this kind of leadership](/posts/margin-for-leadership), and they listened and grew quickly. You might think that you have a high performer who doesn’t need to be challenged. Please know, reader, this person needs to be challenged the most! If you are unable to challenge your high performer with advice on how to get to the next level, perhaps you should consider whether you are the right person to be their manager. Go ask for advice from people more senior than you, and learn what it takes to manage a superstar. In short, _everyone_ should be challenged on their performance. Don’t wait until it’s too late for a low performer who gets surprised by unwelcome news or a career-limiting event. Also, don’t ignore your top performers because you have fallen victim to the “keep doing what you’re doing” syndrome. ## Track Progress Finally, _formally track and celebrate progress and provide accountability for lack of progress._ This part is so simple but often overlooked. Once you have a performance log and have been clear about performance challenges, take the final and important step of _tracking it!_ Bring these items up in your 1:1s and create a plan together to help the employee perform at the next level. This is the hardest part of management because on the one hand, we want everyone to succeed and give low performers the benefit of the doubt, and on the other hand we feel like we would be lost without our high performers and fall victim to never having hard conversations with them. I have found both fears to be unfounded. I managed one person who was struggling in their job and ended up leaving the company. Two years later they had lunch with me and thanked me for the tough conversations and actions that led to their departure.
They had a great new job doing something that interested them, and the job separation was the wake-up call they needed to get themselves in the right place. I managed another person, a high performer whom I pushed to grow, and they ended up being critical to a massive cloud migration project before transitioning to their dream job at a much higher salary. I’m pretty confident that if I had adopted the “keep doing what you’re doing” mantra because I was afraid they would be offended by being challenged, they wouldn’t have grown as they did. This person really appreciated that we had a _real_ performance management relationship, despite all the tough conversations. ## Conclusion In conclusion, make performance management the cornerstone of your weekly management activities. Keep a journal, have open conversations about performance, and help your people grow. In a very short time, you’ll see the magic of a performing, engaged, happy team who appreciates and values your leadership. The fears around bringing up performance management turn out to be mere mirages. --- # 3 Steps for Managers to Discover the Right Roadmap URL: https://hedge-ops.com/posts/3-steps-for-managers-to-discover-the-right-roadmap/ How to discover the right roadmap for your team Engineering managers often struggle with and overlook the effort needed to discover the right work the team needs to do. Solid roadmaps can make or break a team’s overall effectiveness. However, managers can tend to avoid this work because it’s so disruptive. Getting this right keeps senior leadership from seeing you as a _bus driver_ leader who takes orders and gets stuff done but isn’t strategic and impactful enough for the next level. On the surface, it may surprise some readers that this is even an important aspect of engineering management. Some may be overwhelmed by a steady stream of demands and escalations. We know what is needed…everything!
Other managers might find themselves with a product manager who makes the roadmap demands _very clear_, where performance is measured by how well the team delivers on those goals. So why focus on this? As a manager, you’re the one who is responsible for the outcomes of your team. You need to hire and retain great talent, deliver the work, and ensure your customers are successful. On top of that, you need to do all of this while meeting your own personal and professional goals. This is the job! Here’s how I train people to do the job (advice I follow myself): ## Ask Why Until You Understand Everything First, keep asking why until you fully understand the contents and priorities of your roadmap. Teams usually see their work as incredibly obvious, and so many managers see a full rationalization of their roadmap as _someone else’s job._ An overloaded team needs radical prioritization: there is a _why_ behind everything, so they must ask which _why_ has the largest impact. They then need to explain to the others why _their_ request isn’t the highest-impact request. A team that is on a roadmap with a product manager might have a very strategic _why_. If the product manager is doing their job, they should be able to distill this strategic direction into data. For example, they might be pushing for a completely new feature they say will open up a new market segment. Many times in my career, however, I have been surprised by the extent to which some of these roadmap items can be based on something someone important said in a meeting. For example, when asking _why_ over and over again, we find out that the request for the new feature that will open up a new market segment came from a meeting with the COO, who thought the idea might be a good direction. While it may be a great idea, that’s not good enough; we need data!
[So ask why](https://www.amazon.com/Start-Why-Leaders-Inspire-Everyone/dp/1591846447), and keep asking why until you, as the owner of the team, can rationally support the prioritization of your projects and backlog. Don’t succumb to the temptation to say, “We’re doing this feature because a VP wanted it,” or, “[This is simply the right thing to do](/posts/the-inferior-right-way/).” You have to ask _why_ that VP wants that thing and be able to justify it. ## Discover the Team’s Problems Second, understand the team’s problems and how they relate to the roadmap. I became a manager of a team that was inundated with alerts. Dozens or even hundreds of alerts per day came across their paging system or Slack channel, and when I looked at their roadmap, alert management was nowhere to be found! So I sought to deeply understand that problem and put the solution on our roadmap as the top priority. What we were doing was not sustainable. I worked hard to include and understand all stakeholders’ priorities. The stakeholders were all remarkably supportive of making this the priority. I haven’t always been this lucky, but when I have done step one to deeply understand the basis of the team’s roadmap, I can then justify the priority of addressing the team’s problems within that context. Then we mix the two priority streams and compromise. Once we did this exercise with that team, [we were able to reduce alerts by 80% in six weeks](https://medium.com/splunk-engineering/eliminating-alert-fatigue-9-ways-one-team-reduced-alerts-by-80-in-a-month-3cc23362b570). Because we made it a priority and I knew the rest of the roadmap, I was able to justify it to stakeholders. ## Supercharge with a Motivated Team Finally, understand which items on the roadmap motivate each team member. Many managers gloss over the reality that motivation multiplies effectiveness and velocity. 
If a member of my team believes that a particular project will help their career, and they are motivated by career growth, it would be foolish of me to assign that project to someone else. I’ve seen three-month projects take three weeks because the person was motivated. I’ve seen two-week projects take two months because the person was not motivated. As a manager, I prioritize aligning people’s interests with the roadmap. If I’ve done a good job understanding _why_ the roadmap items exist and am including the real problems the team is having in the roadmap, this is not generally difficult. ## Conclusion These simple steps will supercharge your team’s work by making it impactful, strategic, and [aligned](/posts/finding-alignment/) with your team members. When this happens, magic happens. Teams who have gone through this process deliver multiples of what other teams deliver, and it makes me a stronger, more strategic leader, ready for more. I’d love to hear about your experience. --- # My New Friend, Cinc-Auditor URL: https://hedge-ops.com/posts/my-new-friend-cinc-auditor/ My journey with cinc-auditor for CI/CD pipelines, replacing InSpec. Discover how she navigated dependency issues using PackageCloud over RubyGems. So I’m making a CI/CD pipeline to create a simple base image (the image itself is not relevant to the story, just so you know), and I want to validate the configuration scripts before I build the image, right? I mean, y’all know I love some [test driven development that I turn into integration tests](/posts/red-green-refactor). And y’all know I love seeing passing green checkmarks. It’s like my favorite thing. And because I don’t have the need for a Chef license, as I only need to run this locally for my CI/CD process, I just need a little, lightweight tool to run my validation tests.
That’s where [InSpec](https://community.chef.io/tools/chef-inspec/) used to come in handy, but now you need to accept a license agreement to run InSpec, and I’m not a fan of going down that path. So what do I do? I freaking love InSpec, [y’all know that](https://www.hedge-ops.com/posts/categories/inspec). Meet my new friend, [`cinc-auditor`](https://cinc.sh/start/auditor/). Now, it’s been out for a while, but because I was at a place with a Chef license, I had no use for it until now (save for a proof of concept I did a while back). As they state on their [website](https://cinc.sh/about/):

> Cinc is a recursive acronym for CINC Is Not Chef. The Cinc project is in no way formally affiliated or associated with Chef Software Inc.

> Is Cinc compatible with upstream products? Yes, it’s the same code as the original products; only the branding is changed.

And no license is needed, so it’s just what I need. So right now I have an integration testing pipeline that basically does this:

```bash
# build a docker image from a script of base image config (the Dockerfile runs a bash script)
$ docker build -t baseimage:test .

# run the image with all the config on it
$ docker run -d -i --name baseimage baseimage:test

# run InSpec, no wait, cinc-auditor against the image/container I just built
$ bundle exec cinc-auditor exec ./test/integration/my_config -t docker://baseimage

# make sure the packer config is valid
$ packer validate ./Packerfile.pkr.hcl
```

And I _had_ a simple `Gemfile` that looked like this:

```ruby
# spoiler alert - this Gemfile didn't work
source 'https://rubygems.org'

ruby '2.6.6'

gem 'rake'

source "https://packagecloud.io/cinc-project/stable" do
  gem "cinc-auditor-bin"
end
```

You can see there that `cinc-auditor` is pulled from the [PackageCloud](https://packagecloud.io) repository, not [RubyGems](https://rubygems.org), so we have `bundler` grab it from there.
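For context, the `./test/integration/my_config` profile above is just a plain InSpec profile, which cinc-auditor runs unchanged. A control inside it might look something like this sketch (the package and file names here are hypothetical placeholders, purely to show the shape):

```ruby
# test/integration/my_config/controls/base_image.rb
# Hypothetical control -- swap in whatever your base image config actually installs.
control 'base-image-essentials' do
  impact 1.0
  title 'Base image has its essential tooling and config'

  # the package resource queries the container's package manager
  describe package('curl') do
    it { should be_installed }
  end

  # plain file checks work against the docker:// target too
  describe file('/etc/os-release') do
    it { should exist }
  end
end
```

Each `describe` block gets evaluated against the running `baseimage` container via the `docker://` target, and cinc-auditor reports a pass or fail per test.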
But I was having an annoying issue where `bundler` couldn’t find the `chef-config` and `chef-utils` gems (dependencies of the `cinc-auditor` gem) on the RubyGems hosting server, and it was telling me:

```text
Could not find chef-config-16.12.3 in any of the sources
```

And I knew it was a lie! I was so bothered! I could see it [_right there_](https://rubygems.org/gems/chef-utils)! So what gives? Then I found the answer [here](https://packagecloud.io/cinc-project/stable/install#bundler) in the comments.

> Note: It’s recommended you add the official [source](https://rubygems.org), unless your packagecloud repository can meet all the dependency requirements in the Gemfile.

Okay, admittedly that doesn’t really tell me anything I didn’t already know, but it caused me to assume that Cinc wants you to pull all the dependencies that it can from PackageCloud, not RubyGems. So I changed my `Gemfile` to look like this, and voilà, it worked. I was able to pull in all the dependencies.

```ruby
ruby '2.6.6'

source 'https://rubygems.org' do
  gem 'rake'
end

source 'https://packagecloud.io/cinc-project/stable' do
  gem 'chef-config'
  gem 'chef-utils'
  gem 'cinc-auditor-bin'
  gem 'inspec'
  gem 'inspec-core'
end
```

TL;DR: The other gems now being pulled from PackageCloud are all dependencies of `cinc-auditor-bin`, so we pull them from PackageCloud and not RubyGems. _Hope this helps!_ --- # 2020 Year in Review URL: https://hedge-ops.com/posts/2020-year-in-review/ Our year in review post talking through how we adapted to the pandemic, migrated to Azure, and helped restaurants stay open. It’s been a couple of years since I regularly wrote on this blog, and wow, a lot has changed in that time.
In early 2017, I took on an Executive Director of Cloud Engineering role at [NCR](https://www.ncr.com) for [Hospitality](https://www.ncr-hospitality.com/en/). This shifted my focus from being a Cloud Engineering Architect, using culture and technology to achieve the transformational outcomes our business needed to get to the next level, to an executive function over a four-region global organization of 70–100 people. This was quite a growth challenge for me, and I hope you understand that I didn’t have it in me to write about it. In my new role my team has accomplished a lot that we’re proud of. First, we migrated our entire product portfolio in three international regions to Azure, using Infrastructure as Code and [Chef](/posts/tags/chef). The vast investments we made that you can read about [earlier](/posts/policyfiles) were put to good use, and it was great to see the investment pay off so handsomely. Second, in 2020 we managed a 4X increase in ordering traffic almost overnight when the pandemic hit and people were forced to utilize digital channels for ordering food. The pandemic has hit the restaurant industry hard, but the silver lining has been seeing our [NCR Online Ordering](https://www.ncr.com/restaurants/mobile-online-ordering) product help struggling restaurants stay open through the crisis. The final element of my role that I’m proud of is the extensive coaching I’ve started doing with people both inside and outside my team. I’ve reached a place where the years of problems, reading, growing, and thinking about career development have yielded some good, transformational advice for people wanting to take their careers to the next level. I find it incredibly rewarding, and I want to write about it. Most of these posts will come from those sessions, where an insight is what is needed to take someone’s mentality to the next level. On a personal level, our family is very happy in Boulder; we feel like we have found our place in the world.
Late last year, [Annie](/about/annie) started an engineering role at [HashiCorp](https://www.hashicorp.com), a company that impresses us both. Our kids are getting to the secondary education phase where they know everything. The pandemic has presented challenges for us as parents and as a family that greatly surpass anything we have ever experienced. But we have each other, and we are making it through the journey. Thanks for reading. I’m hopeful the insights I’ve gained in the past couple of years can be as helpful for you as they are to me and those I coach. --- # End of the First Chapter URL: https://hedge-ops.com/posts/end-of-first-chapter/ Annie reflects on her transformative journey from 10th Magnitude to HashiCorp. A heartfelt farewell to one chapter and an eager welcome to the next. I have been very fortunate to work in my current job as a Cloud Automation Engineer for a leading Azure consultancy and to experience the accelerated growth that only comes from working at such a pace. [10th Magnitude](https://www.10thmagnitude.com/) (we say _10M_ internally) is now almost 150 employees, and I was hired when we were still in the 30–40 person range. I have been there for four years and have seen immense growth in my own career, in the company, and in myself as a person. I am so proud of what we built together and that I grew a strong technical foundation there. I am extremely grateful to my colleagues and CEO, Alex Brown, at 10th Magnitude for taking a chance on me [4 years ago](/posts/leaning-in) and fostering my growth ever since. Every little boost counts when you’re onboarding someone new, and I remember everyone who helped along the way. I extend my heartfelt thanks. These past four years have been some of the best of my life, and I will hold the memories dear. I would not be where I am today without 10M.
Alas, while I was eager to see 10M and the team I manage through a successful transition to [Cognizant](https://www.cognizant.com/) and to help build a strong Microsoft Business Group, a job posting came across my radar that looked so perfect for me, almost like it was written for me. On a hopeful whim, I applied, interviewed, and then accepted an offer as a Test Infrastructure Engineer on the [Terraform Enterprise](https://www.terraform.io/docs/enterprise/index.html) team at [HashiCorp](https://www.hashicorp.com/), focusing on building a platform that enables infrastructure testing and CI/CD, with automation that runs tests and manages its own infrastructure. When I saw [`kitchen-terraform`](/posts/kitchen-terraform-and-inspec) knowledge as a qualification in the job posting, I was sold. (I may have told one or two of the interview panel that I gave a [HashiTalk](https://youtu.be/q1Vx02N1_vo) on it this year.) This new position lines up so perfectly with my career objectives, as I’ve been wanting to move into software development while leveraging my existing infrastructure and CI/CD skills, which was a challenging feat at 10M since my role had little exposure to software opportunities. It is really satisfying that I will get to see my goals come to fruition. I start at HashiCorp on October 12, and I am over the moon. I cannot wait to get started with this team of extremely talented engineers! (Thank you, TFE team, for letting me join your ranks!) As one really beautiful chapter ends, another very exciting one begins. --- # Terraform + Kitchen + InSpec URL: https://hedge-ops.com/posts/kitchen-terraform-and-inspec/ Explore the integration of Terraform, Kitchen, and InSpec for efficient testing of Terraform deployments and a smoother development workflow.
_Disclaimer:_ I like for my blog posts to be pretty basic so that you can pick up a new skill without knowing a ton of background, but this post assumes that you know about [InSpec](/posts/inspec-basics-11), [Terraform](/posts/terraform-and-azure), and [Test Kitchen](/posts/red-green-refactor). It also assumes that you know how to [call a Terraform module from another module](https://www.terraform.io/docs/configuration/modules.html) and that you have knowledge of the [kitchen-terraform](https://github.com/newcontext-oss/kitchen-terraform) gem. 1. [So what’s the problem](/posts/kitchen-terraform-and-inspec#so-whats-the-problem) 2. [How to do it in Test Kitchen](/posts/kitchen-terraform-and-inspec#how-to-do-it-in-test-kitchen) 3. [Testing, though](/posts/kitchen-terraform-and-inspec#testing-though) 4. [Concluding Thoughts](/posts/kitchen-terraform-and-inspec#concluding-thoughts) ## So what’s the problem I want to test my Terraform deployments while I’m in the process of development. I had long searched for a Terraform development testing strategy that leveraged InSpec and that I thought would be worthwhile. I have always seen the value in running an InSpec profile after a Terraform deployment to test, so I had started doing that, like I [showed you here](/posts/inspec-basics-11). I had heard about [Test Kitchen for Terraform](https://github.com/newcontext-oss/kitchen-terraform) (the `kitchen-terraform` gem) and wanted to use it, but I found it kludgy and thought that the test module was too abstracted from the actual Terraform module you’re developing. Plus, I didn’t find that it gave me anything new over simply running an InSpec profile after a Terraform run. ## InSpec as a `null_resource` / `local_exec` I started trying to develop Terraform modules using that testing strategy above, and I found it to be slow and cumbersome.
Running InSpec after Terraform is nice for validating provisioning, but having to run your entire `terraform apply` before seeing any InSpec output while you're developing your module and tests is not fun. What you would do is what I outlined in this [post](/posts/inspec-basics-11). And if you want to validate both resource provisioning and VM configuration, then you'd use a [null_resource](https://www.terraform.io/docs/providers/null/resource.html) with multiple InSpec commands in a [local_exec](https://www.terraform.io/docs/provisioners/local-exec.html) provisioner. It would look something like:

```ruby
resource "null_resource" "inspec" {
  provisioner "local-exec" {
    command = <<EOF
inspec exec <path-to-profile> -t azure://<your-subscription-id>
inspec exec <path-to-profile> -t winrm://<vm-hostname> --user <user> --password <password>
EOF
  }
}
```

## How to do it in Test Kitchen

With `kitchen-terraform`, the equivalent lives in the verifier section of your `.kitchen.yml`, where each entry in `systems` defines an InSpec session:

```yaml
verifier:
  name: terraform
  systems:
    - name: azure # this session will target the Azure subscription
      backend: azure
      controls:
        - example-azure-resources # this looks for that list of control names in the profile in test/integration/

platforms:
  - name: terraform

suites:
  - name: example-test
```

When you run `kitchen verify`, it will run a separate InSpec session for each name in `systems`. I _love_ this.

## Concluding Thoughts

This is a really cool tool, although I've heard that [Terratest](https://github.com/gruntwork-io/terratest) is the preferred testing strategy for Terraform and Azure. And if you Google _terratest vs inspec_, you'll see some of the arguments. But here's my two cents: if you're not testing at all because the barrier to entry for Terratest is too high, and you already know Kitchen and InSpec because of Chef cookbook development, then by all means, just use InSpec. If it stops working for you, then sure, go do a POC of Terratest to see if it's worth learning. My team, however, procrastinated because we wanted to make sure we were implementing the best testing strategy, and _perfect_ got in the way of _good enough_.
I honestly don’t know which is _better_ because I haven’t used Terratest, but I do know that my life just got a _lot_ easier for having implemented Test Kitchen and InSpec in my Terraform development.

---

# Terraform + Azure + WinRM

URL: https://hedge-ops.com/posts/terraform-and-winrm/

How to set up a Windows VM in Terraform that joins a domain and allows WinRM access. Addresses strict Group Policy issues.

Walk with me for a moment if you will. Let’s say you need to spin up a Windows 2016 node in Terraform that has to join the Active Directory domain. And then you need to be able to WinRM into that node during your Terraform run, because let’s say you need to add a `remote_exec` provisioner that does something that you can only do as a domain account user on the domain, and it has to happen within Terraform for whatever reason. Let’s also say that your Group Policy is super strict, and there’s no changing it.

## Acceptance Criteria

Be able to WinRM into a Windows Server 2016 VM with Terraform from a Shared Image Gallery image.

## Challenges

1. The node being provisioned needs to be on the domain.
2. There is an Active Directory Group Policy requiring that WinRM be authorized via Kerberos or NTLM.
3. Only a domain account user can make the request to the CA.
4. You have to WinRM over HTTPS as a domain account user.

## TL;DR Steps

1. Create your virtual machine
2. Join the domain
3. Run a custom script extension that does all the work
4. Now you can WinRM

```hcl
resource "azurerm_virtual_machine" "self" {}
resource "azurerm_virtual_machine_extension" "join-domain" {}
resource "azurerm_virtual_machine_extension" "custom-script" {}
resource "null_resource" "remote_exec" {}
```

## The wordy instructions

So let’s talk about this…I’ll assume you’ve already completed the first two steps (see TL;DR above) in Terraform. Step three is where we’ll hang out for a bit.
The way you configure WinRM to run over HTTPS is by [importing a certificate](https://www.thewindowsclub.com/manage-trusted-root-certificates-windows) and then creating a _WinRM listener_ that is authenticated by that certificate. Assuming you’ve gotten your certificate, all you have to do is add the listener to your `winrm config`, which you can do by running this in PowerShell:

```powershell
# Get the thumbprint of the certificate first. You may have to add more criteria
# to narrow it down if there are others w/hostname in the name.
$thumbprint = (Get-ChildItem -Path Cert:\LocalMachine\My | Where-Object {$_.Subject -match "$hostname"}).Thumbprint

# Create a listener that uses that thumbprint.
winrm create winrm/config/Listener?Address=IP:$ip+Transport=HTTPS "@{Hostname=`"$hostname`"; CertificateThumbprint=`"$thumbprint`"}"
```

Great, right? Let’s get that certificate and get moving. Oh, wait…you can’t just use a random self-signed certificate spun up in Key Vault. No, your Group Policy mandates that the certificate be signed by the Certificate Authority (CA) and that the CA be your company, let’s call it _Fireside, Inc_. Okay, so you’ll need to request a certificate from Fireside, Inc. with a PowerShell script like [this](https://github.com/J0F3/PowerShell/blob/master/Request-Certificate.ps1) or [this](https://4sysops.com/archives/create-a-certificate-request-with-powershell). Oh, but only a domain account user can make the request to the CA (per the Group Policy). So how do I make the request to the CA as a domain user if Terraform only runs as the local user I just created? Well, this is tricky. You _can_ run as another user, but we have to do some work to get there first given the constraints of your AD Group Policy. You will have to run in an elevated shell, which Terraform doesn’t do on its own, so let’s see how we can make this happen for you.
## How to run in an elevated shell

You want to run as the local admin (non-domain account) that has permission to run as a domain user with its credentials, but in order to do that you need to be in an elevated shell. For that we go to none other than the go-to Windows WinRM guru, [Matt Wrock](http://www.hurryupandwait.io/). In an `azurerm_virtual_machine_extension`, which runs as the non-domain local admin user, you’ll call [Matt Wrock’s PowerShell script](https://github.com/WinRb/winrm-elevated/blob/master/lib/winrm-elevated/scripts/elevated_shell.ps1) called `elevated_shell.ps1`. (He created this script as part of a gem called `winrm-elevated`, which you can also use, but we didn’t.) There is a parameter in that script called `$script`, which is the script that you want run in the elevated shell. You may need to add your domain account user at this point, so in the beginning of Matt’s script go ahead and add a one-liner to add your domain user to the administrators group on the machine. Then the script creates a task that runs `$script` in the elevated shell, which allows you to run as the domain user. As long as that domain user is in the Administrators group on the machine you are provisioning, it should have the required access rights.

Your `$script` parameter will be another script that you create called `setupWinRm.ps1` that requests a certificate from the Certificate Authority (CA) as the domain user. Then it will configure WinRM for HTTPS on `5986` with that certificate and open the firewall for HTTPS. That process enables WinRM for HTTPS through Kerberos or NTLM authentication. Your Terraform block will look something like this:

```hcl
resource "azurerm_virtual_machine_extension" "custom-script" {
  # < all the arguments here >

  settings = <<SETTINGS
    {
      "commandToExecute": "<powershell command that runs elevated_shell.ps1 with setupWinRm.ps1 as its script parameter>"
    }
SETTINGS
}
```

Now you can WinRM over HTTPS with a `remote_exec` provisioner or whatever you need.

![You Configured WinRM Cookie](/article_images/2019-04-17-terraform-and-winrm/winrm.png)

_Great, so problem solved, right?_ Almost.
Your DNS entry may not become available on the DNS servers for a while, making authentication with your DNS name impossible until the entry is set. It’s possible that replication from the DNS server to others takes about 15 minutes, and from the office to Azure is another 15 minutes. You could try resolving the DNS name of the new VM by running a PowerShell command to do a forced lookup against your internal DNS servers directly. Those servers should basically give you a result immediately. If that doesn’t work, as a last resort, you can simply add some functionality to our `remote_exec` script that adds the DNS entry to the provisioner’s hosts file (and cleans it up afterward).

_Why shouldn’t I just use Terraform’s suggested method for enabling WinRM over HTTPS?_

Tombuildsstuff created an [excellent example](https://github.com/terraform-providers/terraform-provider-azurerm/tree/master/examples/virtual-machines/provisioners/windows) which creates a new certificate in Key Vault, installs it on the node being provisioned, and uses that certificate to create the HTTPS WinRM listener during VM provisioning. However, again, check your Group Policy to see if it allows WinRM on a certificate that’s not issued by your domain. If you can’t request a certificate unless you’re on the domain, then you have a little chicken-and-egg problem.

_Why wouldn’t I just use the stock gallery image that has WinRM configured already?_

You can’t configure WinRM over HTTPS this way, so it’s less secure. It _is_ an option, just not very attractive. It also doesn’t follow most people’s standards of using images, like the Shared Image Gallery in Azure with Packer-built images.

## Concluding Thoughts

Terraform doesn’t want to replace a pipeline tool (Jenkins) or a configuration management tool (Chef), and we shouldn’t try to make it. When we try to make tools do things they weren’t made to do, we get frustrated pretty quickly.
That said, use with caution and use your best judgment.

---

# Azure’s Managed Identity in Test Kitchen

URL: https://hedge-ops.com/posts/managed-service-id-in-kitchen/

Explore how Azure’s Managed Identity enhances security in Test Kitchen by replacing client secrets with identities assigned to your kitchen nodes for easier testing.

I’m a big fan of Test Kitchen for testing Chef, and I really like the `kitchen-azurerm` driver. I started my client with it two years ago, and they’re using it for all of their cookbook CI/CD now. It’s fantastic. However, we’ve had a little nagging problem ever since we started using it: what to do with that darn client secret of the service principal. We had been saving it as an environment variable both on our workstations and on Jenkins, but you can see why that’s not desirable—too easy to let it loose out into the wild.

Last fall, Microsoft introduced [Azure Managed Identities](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview). In its documentation, they outline our problem exactly:

> A common challenge when building cloud applications is how to manage the credentials in your code for authenticating to cloud services. Keeping the credentials secure is an important task. Ideally, the credentials never appear on developer workstations and aren’t checked into source control. Azure Key Vault provides a way to securely store credentials, secrets, and other keys, but your code has to authenticate to Key Vault to retrieve them.

To solve this, they created managed identities. Basically, you create a _user-assigned managed identity_ in your subscription as a stand-alone resource. From there, Azure assigns that resource an Active Directory identity - kind of like creating a service principal. But then, unlike a service principal that you use _on_ a machine, you assign this identity _to_ a machine, and now that machine _has_ all the permissions assigned to the managed identity. I love this.
I think it’s so convenient. Problem solved, right? Oh, but how can I assign an identity to my Test Kitchen nodes? Well, you couldn’t until recently, when [zanecodes](https://github.com/zanecodes) [added this functionality](https://github.com/test-kitchen/kitchen-azurerm/commit/22bc172e415ec07c25f9461d9047513359c61866) to the [kitchen-azurerm](https://github.com/test-kitchen/kitchen-azurerm#kitchenyml-example-10---enabling-managed-service-identities) driver. Now, all you have to do is create a Test Kitchen identity resource in your subscription with all the permissions that it needs, nothing less, nothing more. And then add that one little `user_assigned_identities` entry to the driver section of the `.kitchen.yml` of your cookbook.

```yaml
driver:
  name: azurerm
  subscription_id: "555-your-sub-id-here-555"
  location: "Central US"
  machine_size: "Standard_D2_V2"
  image_urn: MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest
  user_assigned_identities:
    - /subscriptions/555-your-sub-id-here-555/resourcegroups/test_kitchen_stuff/providers/Microsoft.ManagedIdentity/userAssignedIdentities/test-kitchen-identity
```

And you can remove that dreaded client secret from your environment variables! Yay for security!

---

# InSpec Basics: Day 11 - Validating Azure Resources with InSpec Azure

URL: https://hedge-ops.com/posts/inspec-basics-11/

Discover how to use InSpec to scan and validate your Azure resources. A step-by-step guide on how to use the InSpec Azure resource pack to ensure compliance.

Up until InSpec 2.0, you could only use InSpec to scan actual infrastructure. When resources became available in InSpec to scan cloud subscriptions, I was thrilled. There are a million and one reasons you’d want to take stock of your Azure resources.
Whether you’re trying to validate that your ARM template or Terraform script did what it said it was going to do, or you have compliance standards that you have to audit, or you just want to make sure that you don’t write over anything before a deployment, the [`inspec-azure`](https://github.com/inspec/inspec-azure) resource pack is a great tool for this. But first, if you’ve missed out on any of my tutorials, you can find them here:

- Day 1: [Hello World](/posts/inspec-basics-1)
- Day 2: [Command Resource](/posts/inspec-basics-2)
- Day 3: [File Resource](/posts/inspec-basics-3)
- Day 4: [Custom Matchers](/posts/inspec-basics-4)
- Day 5: [Creating a Profile](/posts/inspec-basics-5)
- Day 6: [Ways to Run It and Places to Store It](/posts/inspec-basics-6)
- Day 7: [How to Inherit a Profile from Chef Compliance Server](/posts/inspec-basics-7)
- Day 8: [Regular Expressions](/posts/inspec-basics-8)
- Day 9: [Attributes](/posts/inspec-basics-9)
- Day 10: [Attributes with Environment Variables](/posts/inspec-basics-10)

## Why and How

If you like to skip ahead, feel free:

1. [What you are going to do with InSpec in this tutorial](/posts/inspec-basics-11#what-you-are-going-to-do-with-inspec-in-this-tutorial)
2. [Why do I need to validate my Azure subscriptions?](/posts/inspec-basics-11#why-do-i-need-to-validate-my-azure-subscriptions)
3. [Prerequisites](/posts/inspec-basics-11#prerequisites)
4. [InSpec Azure Resource Pack](/posts/inspec-basics-11#inspec-azure-resouce-pack)
5. [Red - write a failing test](/posts/inspec-basics-11#red-write-a-failing-test)
6. [Green - make the tests pass with Terraform](/posts/inspec-basics-11#green-make-the-tests-pass-with-terraform)
7. [Concluding Thoughts](/posts/inspec-basics-11#concluding-thoughts)

## What you are going to do with InSpec in this tutorial

1. You will run InSpec both locally and from git to test your Azure subscription, validating that it is in the state the InSpec profile expects.
2. You will use Terraform to create the missing resources and validate their provisioning with your InSpec profile.

## Why do I need to validate my Azure subscriptions

How many times have you run `terraform plan`, only to have `terraform apply` fail right after it for whatever reason? `terraform plan` is fine for development when you need a quick confirmation of what’s already deployed, but what if someone coded something incorrectly, maybe changing an important network security group? Is anyone auditing the subscription that closely? Before you run a `terraform apply`, what if you had an InSpec profile you could run against your Azure subscription to validate the state of your resources? What if you could define the desired state of your subscription in an InSpec profile and validate it without actually changing anything? And what if you could validate this whenever you want to ensure that the resources haven’t changed? That’s really cool, don’t you think?

Have you ever used Chef’s [`why-run`](https://blog.chef.io/2018/03/14/why-why-run-mode-is-considered-harmful/)? Basically, it’s a command that tells you which Chef resources would change or converge based on your changes and the current state of the node, without actually running anything. Sure, you might run it during development to see what happens, but would you ever use this for your compliance audits? Of course not; that’s dumb. In the same vein, you’d never simply use `terraform plan` to audit what’s in your Azure subscription.

Another scenario—what if you have certain config that you want in all of your Azure subscriptions? How are you validating that? Let’s use the network security group example again. What if all of your subscriptions required the same rules?
Wouldn’t it be nice to just run the same InSpec profile against all of them in one fell swoop? Okay, if you’re convinced that this is a worthwhile pursuit, then read ahead.

## Prerequisites

Now, before we start, let’s get some stuff in order. You’re going to need the following:

- InSpec is [installed](https://www.inspec.io/downloads/)
- An Azure service principal with contributor rights to your Azure subscription
- A `.azure/credentials` file in your home directory (see [_Azure Platform Support in InSpec_](https://www.inspec.io/docs/reference/platforms/))
- Terraform is [installed](https://www.terraform.io/downloads.html)

If you haven’t worked with an Azure service principal before, go to the link above and follow the directions for _Setting up Azure credentials for InSpec_ and _Setting up the Azure Credentials File_ exactly. It can be pretty frustrating if you set it up incorrectly, so follow the directions carefully. When you think you’re finished, validate that your service principal is set up properly by trying to make a few calls to your Azure subscription with the Azure CLI or PowerShell. Both sets of instructions will tell you how to log in on the command line with your service principal. If you want to further validate that your service principal can see your resources, then look up some commands such as [`az vm list`](https://docs.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-list) or [`Get-AzureRM`](https://docs.microsoft.com/en-us/powershell/module/azurerm.compute/get-azurermvm?view=azurermps-6.13.0) and try them out. Just be careful if you’re not familiar with interacting with your Azure subscription from the command line; don’t go deleting stuff you’re not supposed to be deleting.

## InSpec Azure Resource Pack

So honestly, if you just set up your credentials, then the hard part is over. If you’ve used InSpec before, then it’s smooth sailing from here. If not, then follow along.
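Since a malformed credentials file is the most common stumbling block here, it can be worth sanity-checking it before blaming InSpec. The file is a small INI-style document keyed by subscription id (per the platform docs linked above). Here is a minimal Ruby sketch of such a check; the section name and values are placeholders, and in real use you would read `~/.azure/credentials` instead of the inline sample:

```ruby
# Placeholder stand-in for ~/.azure/credentials; in real use you would do:
#   sample = File.read(File.join(Dir.home, '.azure', 'credentials'))
sample = <<~CREDS
  [555-your-sub-id-here-555]
  client_id = "ffff-your-client-id-ffff"
  client_secret = "your-client-secret"
  tenant_id = "aaaa-your-tenant-id-aaaa"
CREDS

required = %w[client_id client_secret tenant_id]

# Walk the INI layout: [section] headers name subscriptions,
# key = value lines carry the service principal credentials.
sections = {}
current = nil
sample.each_line do |line|
  line = line.strip
  next if line.empty?
  if line.start_with?('[') && line.end_with?(']')
    current = line[1..-2]
    sections[current] = []
  elsif current && line.include?('=')
    sections[current] << line.split('=', 2).first.strip
  end
end

sections.each do |subscription, keys|
  missing = required - keys
  puts(missing.empty? ? "#{subscription}: ok" : "#{subscription}: missing #{missing.join(', ')}")
end
# prints "555-your-sub-id-here-555: ok"
```

This only checks that the keys are present; logging in with the Azure CLI or PowerShell, as described above, is still the real test that the values actually work.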
The first thing we need to do is create an InSpec profile, so if you remember how to create one, then do that and make sure it’s committed to git. If you don’t remember, then follow this [quick tutorial](/posts/inspec-basics-5) to set one up.

In order to validate Azure resources, we’re going to put the inspec-azure resource pack to use so that we can run our automated tests against Azure. All we have to do is tell the InSpec profile to depend on the `inspec-azure` resource pack by adding a few lines to the `inspec.yml` file in your profile. Open up the InSpec profile that you just created in your editor of choice (mine’s Visual Studio Code), and add these lines to the end of your `inspec.yml`:

```yaml
depends:
  - name: inspec-azure
    url: https://github.com/inspec/inspec-azure/archive/master.tar.gz
supports:
  platform: azure
```

Remember that yaml is white-space sensitive, so use spaces and not tabs.

## _Red_ (write a failing test)

In the spirit of [red, green, refactor](/posts/red-green-refactor), we’re going to write a test, watch it fail, remediate, and then watch it pass. In your controls directory, create a file called `example.rb`. In that file, let’s add some tests. Before that, however, let’s create a variable so that we don’t have to repeat ourselves. Add this to the top:

```ruby
resource_group = 'my-resources'
```

Now we’re going to add our controls.
Here are three different tests; see if you can tell what they’re testing:

```ruby
control 'azurerm_virtual_machine' do
  describe azurerm_virtual_machine(resource_group: resource_group, name: 'my-vm') do
    it { should exist }
    its('type') { should eq 'Microsoft.Compute/virtualMachines' }
  end
end

control 'azure_network_security_group' do
  describe azure_network_security_group(resource_group: resource_group, name: 'nsg') do
    it { should exist }
    its('type') { should eq 'Microsoft.Network/networkSecurityGroups' }
    its('security_rules') { should_not be_empty }
    its('default_security_rules') { should_not be_empty }
    it { should_not allow_rdp_from_internet }
    it { should_not allow_ssh_from_internet }
  end
end

control 'azure_virtual_network' do
  describe azurerm_virtual_network(resource_group: resource_group, name: 'my-network') do
    it { should exist }
    its('location') { should eq 'centralus' }
  end
end
```

`control 'azurerm_virtual_machine'`

This control is simply checking that the virtual machine (VM) exists and that its type is `Microsoft.Compute/virtualMachines`. It’s also expected to be in the resource group that we defined as `my-resources`, and the VM name should be `my-vm`.

`control 'azure_network_security_group'`

This control is also checking the resource group called `my-resources`, this time for a network security group called `nsg`. The actual tests are pretty clear. It should exist. Its type should be `Microsoft.Network/networkSecurityGroups`. It should have rules in addition to the default rules. Additionally, it should _not_ allow remote desktop (RDP) or SSH access from the internet.

`control 'azure_virtual_network'`

And finally, in that same resource group called `my-resources`, there should exist a virtual network called `my-network`, and it should exist in the `centralus` region.

## _Red_ (watch it fail)

There are two different ways we’re going to run this profile against your subscription.
First, we’re just going to run it locally, and second, we’re going to run it against your profile in git, but that will come later after we’ve created some resources in Azure to test against. It’s helpful to run locally when you’re developing your profile so that you don’t have a bazillion git commits of incorrect tests, so let’s do that now before you commit your work.

_Note_: This is not making any changes to your subscription.

From the command line of your choice, run this command from the InSpec profile directory in which you’re working.

```shell
inspec exec . -t azure://[your-azure-subscription-id-here]
```

If you break that command down, it just means that you’re executing the InSpec profile (`inspec exec`) at the current directory (`.`), and you’re targeting (`-t`) your Azure subscription. This InSpec run should fail, but it _should_ be able to connect to Azure. Your failure should look similar to this:

```text
Profile: InSpec Azure Demo (inspec-azure-demo)
Version: 0.1.0
Target:  azure://[hidden]

  ×  azurerm_virtual_machine: '' Virtual Machine (5 failed)
     ×  '' Virtual Machine should exist
     expected '' Virtual Machine to exist
     ×  '' Virtual Machine should have monitoring agent installed
     undefined method `osProfile' for nil:NilClass
     ✔  '' Virtual Machine should not have endpoint protection installed []
     ✔  '' Virtual Machine should have only approved extensions ["MicrosoftMonitoringAgent"]
     ×  '' Virtual Machine type should eq "Microsoft.Compute/virtualMachines"
     expected: "Microsoft.Compute/virtualMachines"
          got: nil
     (compared using ==)
     ×  '' Virtual Machine installed_extensions_types should include "MicrosoftMonitoringAgent"
     expected [] to include "MicrosoftMonitoringAgent"
     ×  '' Virtual Machine installed_extensions_names should include "LogAnalytics"
     expected [] to include "LogAnalytics"
  ×  azure_network_security_group: '' Network Security Group (6 failed)
     ×  '' Network Security Group should exist
     expected '' Network Security Group to exist
     ×  '' Network Security Group should not allow rdp from internet
     undefined method `[]' for nil:NilClass
     ×  '' Network Security Group should not allow ssh from internet
     undefined method `[]' for nil:NilClass
     ×  '' Network Security Group type should eq "Microsoft.Network/networkSecurityGroups"
     expected: "Microsoft.Network/networkSecurityGroups"
          got: nil
     (compared using ==)
     ×  '' Network Security Group security_rules
     undefined method `[]' for nil:NilClass
     ×  '' Network Security Group default_security_rules
     undefined method `[]' for nil:NilClass

Profile: Azure Resource Pack (inspec-azure)
Version: 1.2.0
Target:  azure://[hidden]

     No tests executed.

Profile Summary: 0 successful controls, 2 control failures, 0 controls skipped
Test Summary: 2 successful, 11 failures, 0 skipped
```

If it doesn’t, then you’ll need to try some troubleshooting. You can start with the following:

- your InSpec installation (run `inspec -v` to make sure you have a version)
- your `inspec.yml` file (check against [this](https://github.com/anniehedgpeth/inspec-azure-demo/blob/master/inspec.yml))
- your `.azure/credentials` file ([this](https://github.com/test-kitchen/kitchen-azurerm#configuration) is a good resource)
- your service principal being logged into Azure from the command line properly ([PowerShell](https://docs.microsoft.com/en-us/powershell/azure/authenticate-azureps?view=azps-1.0.0#sign-in-with-a-service-principal) or [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest#sign-in-using-the-service-principal))
- are you in the right directory :)

If you did successfully run your InSpec profile against your Azure subscription and get the expected failures as noted above, then _great_! Now _commit that bad boy to git_, and let’s move on to the next step! Let’s remediate those failures by adding some resources to your subscription so that those tests pass.
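If you would rather script this red check than eyeball terminal output, InSpec can also emit machine-readable results with `--reporter json` (for example, `inspec exec . --reporter json -t azure://[your-azure-subscription-id-here]`). Below is a minimal Ruby sketch that tallies pass/fail counts from such a report; the inline JSON is an abbreviated, hypothetical stand-in for real reporter output:

```ruby
require 'json'

# Abbreviated, hypothetical stand-in for `inspec exec . --reporter json` output;
# a real report nests many more fields under profiles/controls/results.
report = <<~JSON
  {
    "profiles": [
      {
        "name": "inspec-azure-demo",
        "controls": [
          { "id": "azurerm_virtual_machine",
            "results": [ { "status": "failed" }, { "status": "failed" } ] },
          { "id": "azure_network_security_group",
            "results": [ { "status": "failed" }, { "status": "passed" } ] }
        ]
      }
    ]
  }
JSON

# Flatten every control's results across all profiles in the report.
results = JSON.parse(report)['profiles']
              .flat_map { |profile| profile['controls'] }
              .flat_map { |control| control['results'] }

counts = results.group_by { |r| r['status'] }.transform_values(&:size)
counts.each { |status, n| puts "#{status}: #{n}" }
# prints "failed: 3" then "passed: 1"

# In the red phase we *expect* failures; the surprising case is zero of them.
warn 'No failures: did you target the right subscription?' if counts.fetch('failed', 0).zero?
```

The same tally run against real reporter output is handy in CI, where you want an exit code or a log line rather than a wall of ✔ and × glyphs.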
## _Green_ (make the tests pass with Terraform)

To make this easier, we’re going to use Terraform to create the resources InSpec expects to see in this Azure subscription. If you want to do this without Terraform, say manually in the portal, with PowerShell, the Azure CLI, or whatever, feel free! But IMHO, this is the easiest way to remediate our failures. Onward!

First, make sure Terraform is installed by running `terraform -v`. Good? Good (I hope). Now, let’s create another directory outside your InSpec profile (doesn’t matter where, just _not in_ your profile). Call it `inspec-azure-terraform-demo`. You can either clone [this repo](https://github.com/anniehedgpeth/inspec-azure-terraform-demo.git) that I already pre-baked for you, or just copy [this file](https://github.com/anniehedgpeth/inspec-azure-terraform-demo/blob/master/vm.tf) and paste it into a file called `vm.tf`, and open that in your editor of choice (mine’s Visual Studio Code).

If you’re looking carefully, I snuck in a resource at the very end of this file that will run InSpec for us! Pretty cool, right? It looks like:

```ruby
resource "null_resource" "inspec" {
  provisioner "local-exec" {
    command = "inspec exec https://github.com/anniehedgpeth/inspec-azure-demo.git -t azure://${var.subscription_id}"
  }
}
```

What you’ll need to do, though, is _change that git url to point to your repo_. Remember before when we just used the path `.` to point to our profile? Well, now we’re going to use your git url as the path to point to the profile. You can run this now to execute InSpec against your profile stored in git, just to see that it works. Remember to replace the necessary values.

```shell
inspec exec https://github.com/[your-git-here]/inspec-azure-demo.git -t azure://[your-azure-subscription-id-here]
```

You do need one more thing that is not in this repo, and that’s a variables file.
This Terraform file uses several variables whose values you don’t want to commit to source control (so add the file to your `.gitignore` if you commit this), so create a file called `terraform.tfvars` that looks like this, filled in with your Azure service principal (SPN) info:

```ruby
subscription_id = "REPLACE-WITH-YOUR-SUBSCRIPTION-ID"
client_id = "REPLACE-WITH-YOUR-SPN-CLIENT-ID"
client_secret = "REPLACE-WITH-YOUR-SPN-CLIENT-SECRET"
tenant_id = "REPLACE-WITH-YOUR-SPN-TENANT-ID"
```

From the command line of your choice, run this command from this directory:

```shell
terraform plan
```

When you run this command, Terraform compares what is in the `tfstate` file, the .tf files, and the Azure subscription to see what needs to be created or changed. If this succeeds, then you can run this to provision the resources:

_Note_: This is creating resources in your Azure subscription.

```shell
terraform apply
```

_Hopefully_, your output will look like the following—all passing tests now:

```text
Profile: InSpec Azure Demo (inspec-azure-demo)
Version: 0.1.0
Target:  azure://[hidden]

  ✔  azurerm_virtual_machine: 'my-vm' Virtual Machine
     ✔  'my-vm' Virtual Machine should exist
     ✔  'my-vm' Virtual Machine type should eq "Microsoft.Compute/virtualMachines"
  ✔  azure_network_security_group: 'nsg' Network Security Group
     ✔  'nsg' Network Security Group should exist
     ✔  'nsg' Network Security Group should not allow rdp from internet
     ✔  'nsg' Network Security Group should not allow ssh from internet
     ✔  'nsg' Network Security Group type should eq "Microsoft.Network/networkSecurityGroups"
     ✔  'nsg' Network Security Group security_rules should not be empty
     ✔  'nsg' Network Security Group default_security_rules should not be empty
  ✔  azure_virtual_network: 'my-network' Virtual Network
     ✔  'my-network' Virtual Network should exist
     ✔  'my-network' Virtual Network location should eq "centralus"

Profile: Azure Resource Pack (inspec-azure)
Version: 1.2.0
Target:  azure://[hidden]

     No tests executed.

Profile Summary: 3 successful controls, 0 control failures, 0 controls skipped
Test Summary: 10 successful, 0 failures, 0 skipped
```

After you are finished, don’t forget to destroy the resources you just created with:

```shell
terraform destroy
```

## Concluding Thoughts

There are a ton of resources ready to use that you can find [here](https://www.inspec.io/docs/reference/resources/#azure-resources). I encourage you to take a look and explore what all can be audited with InSpec out of the box.

This is not a new tool, by any stretch. I remember at ChefConf 2017 talking to [Dominik Richter](https://twitter.com/arlimus?lang=en), co-creator of InSpec, about it, and I had to keep it hush because it wasn’t released yet. I was very eager to use it because I was working with Terraform a lot at the time, and I could see a ton of value in it. After InSpec 2.0 was released with Azure resources, I gave it a go, but it was buggy for a little while, or maybe _I_ was buggy, so I didn’t use it. Whatever the case, I overcame the user error and the bugs got fixed, and it’s super easy now! Like so easy I want to use it for everything.

_Who has need of validating Azure resources in your organization?_ Help them out and whip up a quick profile! Or just send them this post and show them how easy it is to use. Seriously, they’ll love you for it.

---

# Terraform Changes The World

URL: https://hedge-ops.com/posts/terraform-changes-the-world/

Discover how Terraform, a tool for building, changing, and versioning infrastructure, made a significant impact on the 2018 US Elections.

We’ve all been subject to the scare-mongering around how technology will _take over_ and AI robots are going to kill us all and whatever. Really, though, we all know that technology is neutral, not inherently good or evil. So I really love a good story about how someone harnessed technology for positive change for humanity.
Regardless of your political stance, I think you can appreciate how the story I’ll relay to you impacts the world. I went to my first HashiConf this year, and I really enjoyed it. I want to give you a recap of my favorite talk, [_How Terraform Will Impact the 2018 US Elections_](https://www.hashiconf.com/schedule#nicholas-klick-dan-catlin).

> In mid 2017, ActBlue began using Terraform to revamp its donation platform, a system which has accepted and processed over $2 Billion for political campaigns and nonprofits on the progressive left. The process began by leveraging Terraform to migrate a PCI-compliant credit card vault to AWS and quickly expanded to support orchestration of the majority of the infrastructure, including non-PCI environments and a Fastly configuration. The agility, modularity, and transparency of Terraform has afforded the ActBlue DevOps team the ability to deliver more features and more responsiveness to our platform during a period of massive growth of Democratic donors, campaigns, and initiatives. This talk will cover the deep technical details of how we use Terraform, as well as how we have promoted and evangelized Terraform across technical teams.

(from the HashiConf schedule)

As opposed to some of the more super-intense, deep-dive technical talks, this one was simpler in approach, and the speakers, Nicholas Klick and Dan Catlin, owned that but were thorough in explaining the benefits of using Terraform. The striking thing to me was how their simple approach to harnessing all that Terraform has to offer could literally change the world.

You may know that [I really love Terraform](/posts/terraform-and-azure), so I was already interested in whatever they had to say. They hit all the basics, and nothing was really _surprising_ because I agreed with all of it. The thing that made this talk so magnificently profound to me was the world-changing impact for which they gave a healthy amount of credit to Terraform.
They noted the benefits of Terraform, on which we can all agree:

1. Infrastructure as Code (IaC)
2. Avoids drift
3. Opens black boxes
4. Lowers the barrier of entry for developers
5. Review changes
6. Reduces time to understand change
7. Enables dev and ops collaboration

![How Terraform Informs our Future Slide](/article_images/2018-11-09-terraform-changes-the-world/terraform.jpg)

Beyond that, the modularity was a huge plus for them:

1. Terraform modules
2. Account segmentation
3. Works across providers
4. Code reuse
5. Common configuration—single language for many providers
6. DRY (Don’t Repeat Yourself) code
7. Variations on common themes

They found they could be very agile in their approach to developing their IaC strategy because of the “emergent benefit of transparency and modularity” and increasing developer engagement, leading to a rapid rate at which they could develop, scale, and respond. They found that they could scale and move quickly because of the simplicity and agility that Terraform provided for them.

Their company, [ActBlue](https://secure.actblue.com/), is empowering underrepresented candidates and taking super-PACs’ money out of the equation by leveling the playing field and allowing money to flow to the people’s candidates, like Beto O’Rourke. And sure, he didn’t win, but no one can argue the impact he had on this election and how we will likely see more of him, due in large part to the role that ActBlue played in his candidacy.

## Decision and Concluding Thoughts

_Never underestimate simplicity._ Great designs that can change the world are built upon simple, strong foundations.

---

# Packer and Azure Managed Images
URL: https://hedge-ops.com/posts/azure-managed-images/

Explore the process of creating managed images in Azure using Packer and Chef. Learn how to use these images across multiple subscriptions and regions.
I ran across an interesting question at work the other day for which I had to do a little digging, so I thought I’d share it with you to maybe save you some digging of your own.

_Disclaimer:_ I’m only talking about _Azure_ here, so if you see me write _subscription_, just know I’m talking about an Azure RM subscription. Also, this assumes that all the subscriptions are under the same AAD (Azure Active Directory) tenant and that you or your [Service Principal](https://docs.microsoft.com/en-us/powershell/azure/create-azure-service-principal-azureps) have access and rights to the necessary subscriptions.

## The Problem and the Goal

We wanted to create managed images at a base level so that provisioning and configuring would be a bit quicker and less error-prone, since there would be less to do. We’d have just one base image for each type of server, i.e. web, agent, SQL, etc. It was important for us that we didn’t have to keep the same image in several different subscriptions.

Our end goal was to create a bunch of managed images in Azure using Packer and Chef and use them across several subscriptions and regions, as opposed to having the same image in multiple subscriptions. We were already doing that, and it wasn’t working for us. We had a lot of managed images in several different subscriptions, which is wasteful and error-prone. How can you ensure that all the images are up-to-date and the same? You may look in your desired subscription, see that the image you want isn’t there, and create a new one. But, “Oh wait,” you say, “let’s just make this one little tweak to the code first,” and now your image is different from the standard image.
As you can see, this can get out of hand and become very error-prone quickly, so wouldn’t it be easier to just have one golden image for each of your component servers?

Also, we don’t want to have to keep the un-generalized OS disk around for these images. They’re just base images, so it’s not necessary; therefore, we don’t want to pay for something that’s not necessary.

Fear not, dear friends, I learned that we are able to achieve this state; however, I’d like to clear up a few questions we had along the way.

## Questions

### 1. What’s the difference between an Azure _Snapshot_ and an Azure _Managed Image_?

This is an important distinction because they are not interchangeable, and what you can do with one, you can’t necessarily do with the other. I don’t find the Microsoft documentation to be super clear about this, so when I was researching it, I kept thinking that I would be able to do something, but really I couldn’t because I was reading about snapshots and not managed images. I ran across a GitHub discussion where [@Karishma-Tiwari-MSFT](https://github.com/Karishma-Tiwari-MSFT) laid it all out very clearly. [She says](https://github.com/MicrosoftDocs/azure-docs/issues/12540):

> A VM \[managed\] Image contains an OS disk, which has been generalized and needs to be provisioned during deployment time. OS Images today are generalized. This is meant to be used as a _model_ to quickly stamp out similar virtual machines, such as scaling out a front-end to your application in production or spinning up and tearing down similar development and test environments quickly.
>
> An image of a virtual machine is a copy of the VM which encompasses the full definition of \[the\] virtual machine’s storage, containing the OS disk, all data disks, data files and applications. It captures the disk properties (such as host caching) you need in order to deploy a VM in a reusable unit.
>
> A Snapshot contains an OS disk, which is already provisioned.
> It is similar to a disk today in that it is _ready-to-use_, but unlike a disk, the VHDs of a Snapshot are treated as read-only and copied when deploying a new virtual machine. A snapshot is a copy of the virtual machine’s disk file at a given point in time, meant to be used to deploy a VM to a good known point in time, such as check pointing a developer machine, before performing a task which may go wrong and render the virtual machine useless.

I thought she did a great job describing the difference, and Microsoft should use it for their documentation, but I digress. So if the snapshot isn’t generalized, then that means you can’t do certain things that you might want to do with an image, like change the hostname. It’s not as generic as an image is. What I will be discussing in this post is definitely _Managed Images_, not _Snapshots_.

As a freebie, though, I can tell you that Snapshots seem much easier to move around. The PowerShell module _AzureRmSnapshot_ will work for moving snapshots across regions and subscriptions. Likewise, there is a PowerShell module for copying managed disks as well, but I have yet to find one for managed images without the un-generalized OS disk (more on that later).

### 2. Does Packer allow you to publish Managed Images to multiple subscriptions and regions?

Sort of, but it’s not very elegant for Azure. In your [Packer template](https://github.com/hashicorp/packer/blob/master/examples/azure/windows.json#L12), each region and subscription would need its own builder and a unique image name, which means adding a name parameter to that section. Packer is essentially building a VM, generalizing it, making it into an image, and then deleting everything except for the image. That means that a virtual machine is getting built in _each_ of those subscriptions and regions in order to create an image out of it. That’s a bit heavy for my liking.
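To make the "one builder per subscription and region" point concrete, here's a rough Python sketch of how the builder count multiplies. The subscription and region values are hypothetical, and the dictionaries are a simplified stand-in for real `azure-arm` builder blocks in a Packer template, not a full template:

```python
from itertools import product

# Hypothetical subscriptions and regions; a real Packer template would need
# one full azure-arm builder block per combination, each with a unique
# managed image name.
subscriptions = ["sub-dev", "sub-prod"]
regions = ["centralus", "eastus2"]

builders = [
    {
        "type": "azure-arm",
        "subscription_id": sub,
        "location": region,
        "managed_image_name": f"base-web-{sub}-{region}",
    }
    for sub, region in product(subscriptions, regions)
]

# Every builder means Packer provisions (and later tears down) a full VM,
# so 2 subscriptions x 2 regions = 4 VM builds for one image definition.
print(len(builders))  # 4
```

That multiplication is exactly the "heaviness": each entry in `builders` is an entire VM build, not just a copy operation.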
It’d be great if Azure had the capability of copying managed images from subscription to subscription and region to region, but it doesn’t without the un-generalized OS disk. I’m not sure why the capability isn’t there for Azure (it is with [AWS](https://www.packer.io/docs/builders/amazon-ebs.html#ami_regions)). Either Packer would need to bake in the ability to copy the images over _before_ generalizing the OS disk, or Azure would need to create a way to copy managed images without the un-generalized OS disk. Regardless, we’re kinda stuck doing it this way for now. (09/23/2018)

### 3. Does Azure allow you to use Managed Images in one subscription to build virtual machines in another subscription?

Quick answer: _Yes!_

This was a little confusing to me, as the [docs say](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/build-image-with-packer):

> If you wish to create VMs in a different resource group or region than your Packer image, specify the image ID rather than image name. You can obtain the image ID with `Get-AzureRmImage`.

That’s all well and good, but it says nothing about creating the VM in another subscription. So I went over to the GitHub documentation for Test Kitchen’s `kitchen-azurerm` driver. And lo and behold, in [example 5](https://github.com/test-kitchen/kitchen-azurerm#kitchenyml-example-5---deploy-vm-to-existing-virtual-networksubnet-use-for-expressroutevpn-scenarios-with-private-managed-image) you can see that the driver uses an `image_id` rather than an `image_urn`. I kicked off a Test Kitchen build using an image from a subscription other than the one I was provisioning into, and it worked! Under the hood, the kitchen-azurerm driver is an ARM template, so that simple little test validated for me that it would work to create a VM from an image in another subscription.
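The reason this works is that a managed image's resource ID embeds the subscription that owns the image, so the deployment's own subscription doesn't have to match. Here's a minimal Python sketch of that ID format; the helper name and the placeholder IDs are mine for illustration, not part of any Azure SDK:

```python
def managed_image_id(subscription_id: str, resource_group: str, image_name: str) -> str:
    """Compose the resource ID of an Azure managed image. The ID embeds the
    subscription that owns the image, which is why a VM deployment in one
    subscription can reference an image that lives in another."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Compute/images/{image_name}"
    )

# Hypothetical IDs, matching the placeholder style used in this post.
image_id = managed_image_id("1234-subscription-id-of-image-5678", "image-RG", "my-image")
print(image_id)
```

An ID built this way is what you'd hand to whatever tool is creating the VM, in place of a bare image name.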
If you were to use the Azure CLI, an ARM template, Terraform, PowerShell, or whatever else to provision your virtual machine from the managed image, you’d simply specify the image ID (which contains the subscription ID) and not the image name. Here is an example of creating a VM in one subscription using an image in another subscription with the [AZ CLI](https://docs.microsoft.com/en-us/cli/azure/ext/image-copy-extension/image?view=azure-cli-latest):

```shell
az vm create \
  --name test-ah123 \
  --resource-group test-ah3 \
  --image /subscriptions/1234-subscription-id-of-image-5678/resourceGroups/image-RG/providers/Microsoft.Compute/images/my-image
```

Which produces the output:

```json
{
  "fqdns": "",
  "id": "/subscriptions/9876-subscription-id-to-create-vm-54321/resourceGroups/test-ah3/providers/Microsoft.Compute/virtualMachines/test-ah123",
  "location": "centralus",
  "macAddress": "xx-xx-xx-xx-xx-xx",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "40.123.456.789",
  "resourceGroup": "test-ah3",
  "zones": ""
}
```

This was great news for us because it took away the need to create images in multiple subscriptions. There is one kicker: you have to make sure image-sharing permissions are turned on for your subscription. In one case, I had a subscription that I wanted to provision a VM into, and the image-sharing permissions were turned off, so I got this Test Kitchen error:

```text
Failed to complete #create action: [{"error"=>{"code"=>"BadRequest", "message"=>"Image sharing not supported for subscription."}}] on windows-vm-azure
```

Right now the UserImageSharing feature is only in _Private Preview_.

> _Edit as of Dec. 26:_
> I have just learned that Shared Image Gallery addresses all these issues. The feature now gives you the ability to manage your images efficiently, share your images across subscriptions and regions, and scale your VM/VMSS deployments. The feature is currently in Public Preview and will be generally available Q1 2019.
UserImageSharing, however, was only being previewed by a set of customers, and I just learned that it will never be generally available. You can find the documentation for Shared Image Gallery [here](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/shared-image-galleries).

### 4. But how _do_ you copy Managed Images from one subscription or region to another?

I was still really curious about how you would actually copy the images to other subscriptions, just in case we ran across a situation where we couldn’t change the image-sharing permissions on a subscription or whatever else might come up. I learned that you can use `az image copy` to copy managed images to another region and/or another subscription. However, the thing that makes it super inconvenient is that it relies on the _un-generalized source OS disk_ as the actual source of the copy. Therefore, if you want to copy the managed image to other regions or subscriptions, you will have to have the un-generalized OS disk from the original VM. This is important because I’m pretty sure that Packer doesn’t give you the option to copy it somewhere first. It does give you the option to not delete the original VM upon error, but not upon a successful completion.

The non-Packer automation workflow super sucks, and I’d love to hear from anyone who knows of a better solution. [This guy](https://michaelcollier.wordpress.com/2017/05/03/copy-managed-images/) has a detailed plan lined up, but it would probably look something like:

1. Create (ARM, PowerShell, Azure CLI, whatever) and configure (Chef, Ansible, etc.) a VM in a subscription however you want.
2. Take a snapshot to preserve the OS disk.
3. [Sysprep/Generalize](https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation) the VM.
4. Create an image from the generalized VM after it finishes deallocating, i.e. `$ az image create -g RG-one -n my-image --source vm-name`
5.
Use `az image copy` or the PowerShell equivalent to copy the image from one subscription to another. See the command below and note that it has a `--target-subscription` parameter in which to put the name or ID of the subscription where the final image should be created.

```text
az image copy --source-object-name
              --source-resource-group
              --target-location
              --target-resource-group
              [--cleanup]
              [--parallel-degree]
              [--source-type {image, vm}]
              [--tags]
              [--target-name]
              [--target-subscription]
```

It would look something like:

```shell
az image copy --source-object-name my-image --source-resource-group RG-one --target-location centralus --target-resource-group other-RG --target-subscription 12345-target-sub-67890 # different sub than you're logged in as
```

But gah, doesn’t that sound like a huge pain in the butt and prone to errors all over the place? All in all, it’s better just to use separate builders in the Packer template. Sure, it’s doing the same thing, but it’s simpler to execute, and you don’t have to have the un-generalized OS disk lying around forever.

## Decision and Concluding Thoughts

In the end, we’d like to never have to do the workflow in #4 because we will hopefully never _need_ to. Because Azure allows you to create a VM from an image in a different subscription and region, it is not likely that we will need to copy images.

![Green Mountain Peaks in View](/article_images/2018-09-23-managed-images/mountain-managed-image.png)

^ It’s a mountain-scape image…get it??

And as always, if you see any errors in this post, [create a pull request](https://github.com/anniehedgpeth/anniehedgpeth.github.io) with the fix and give yourself credit! Cheers!

---

# What My Bike Accident Taught Me About DevOps
URL: https://hedge-ops.com/posts/bike-accident/

Discover how a bike accident taught me valuable lessons about DevOps: the importance of safety, practice, vulnerability, and teamwork in IT.
I love to over-metaphorize things, and I’ve been wanting to do that in regard to my bike accident for some time now. I was in a bicycle accident back in April, and I’ve been thinking about its correlation to DevOps and/or IT in general ever since. A friend of mine who was a champion mountain biker advised me not to tell anyone about my accident here in Boulder, that it was really too embarrassing. Aside from this post, I’m going to heed her advice.

I used to bike a lot, not the spandex-clad, riding-in-a-pack type of biking, but rather just for everyday errands and running around the burbs. I lived in Texas at the time, and honestly I had fallen out of biking for a couple of years because it began feeling dangerous and not worth the risk anymore. I went from feeling like I was on a mission, being an ambassador for biking culture and working exercise into your everyday life, to giving up on it, because it was an uphill battle as the many anti-bikers would honk or crowd my lane or just plain not see me because they’re not used to bikers.

Fast-forward to this past April: I was living in Texas, and I had come up to Boulder, Colorado to house-hunt. I was by myself, too, because the hubs stayed home with the kids. One of the many reasons we chose to move to Boulder is the strong biking-as-an-everyday-activity culture with its amazing, [dedicated bike trail system](https://bouldercolorado.gov/goboulder/bike). As it turned out, we had already put an offer on a house sight-unseen the week prior, so my trip’s purpose pivoted from house-hunting to neighborhood scouting: school principal interviews, checking out some co-working spaces, meeting with friends, etc.

On said scouting mission, I rented a bike instead of a car to get around town. Things were going well on the bike, although I didn’t have some of the things from home that would have made life on a bike easier, like my [Burley trailer cart](https://burley.com/product/travoy/) for groceries, my bike lights, bike gloves, etc.
But I had a helmet, my leather jacket, and a backpack, so I was set. I had made myself nervous a few times going too fast downhill and going over a bump, or going too fast coming up to a stop and fishtailing as I stopped. I was braking with both the front and back brakes when at times I should have only used the back. And I was going too fast (for me) because I was nervous that I was inconveniencing others (a fear born in Texas, where people do not like bikes on their roads). All these behaviors were indicators that I was out of practice and not being safe, but I noted those things and vowed to be more careful.

I was scheduled to be there a solid week, so that Wednesday I had planned to try out a co-working place nearby (I was still working full-time that week and scouting during lunch and after work). After my morning standup, I was excited to bike over and work the rest of my day from an office instead of my Airbnb bedroom. I hadn’t even eaten breakfast or drunk my coffee yet because I wanted to get the full experience of the co-working place, which serves it each morning. It was supposed to be about a thirty- to forty-minute bike ride, but I was in a hurry because I’m awful with directions and needed to account for that. I’m also a bit of a people-pleaser and was worried about being away from my desk for so long, so I rushed.

I was about five minutes from the Airbnb, headed down a long stretch of a wide residential through-street. There were a lot of rough patches in the road from cracks that had grown from ice expansion, and I was careful to steer around them. I blew through a stop sign, since I had picked up some speed on my steady downhill descent, when I saw another rough patch. I know that you’re supposed to look _away_ from obstacles because you will naturally steer toward what you’re looking at, but this time I was worried about going to the left because I didn’t know whether there was a car coming up behind me.
I found myself biking directly into the hole, and I chose instead to brake. And I braked hard. With both brakes. Like a noob.

What happened next felt like about ten minutes’ worth of action but really probably took about ten seconds. My front tire stuck in that hole while my back tire flipped up and over, sending me flying through the air, over my handlebars. I hit my head a little (I don’t really know how hard), and I don’t know if I blacked out or not because I very well could have had my eyes shut the whole time. When I hit my head, the chin strap split my chin open, and my right incisor went through my lip when it hit the pavement, chipping in the process. This also knocked my jaw out of alignment, causing a lot of pain and swelling, so I could only drink smoothies for about a week or so.

My erroneous instinct was to put my hands out in front of me to break my fall. They skidded over the pavement with all the force of my speed and body weight behind them for several feet. Bloodied and raw, I couldn’t even move them. I did eventually roll over onto my shoulder and skidded my elbows, but my very expensive leather jacket took the brunt of that damage.

As I lay there in the street, my body did not want to move. I tried yelling for help, but all the wind was knocked out of me. After my breath finally came back to me in a deep gasp of air that filled my lungs with relief, I cried out for help a few times, hoping someone would hear, because I was really afraid that I could not move. Someone finally drove by after a minute, maybe five, I don’t know.

“Oh my gosh, are you okay?”

“No,” I said. “Can you help me?”

He called 911, and a couple of neighbors came out. One nice man took care of my bike and sunglasses and texted me his number for when I was ready to get them. He even held my phone so I could call my husband there from the middle of the street and tell him what happened.
Another passerby assessed me for a broken neck, asking me a bunch of questions to see what I could _feel_.

“Do you feel any tingling in your hands?” he asked.

“Uh, I don’t know. I don’t think so.”

“Can you move your fingers?”

“Uh huh.”

“Can you move your feet?”

“I think so.”

“Okay, that’s good, but don’t move until the paramedics get here.”

I don’t know if he knew anything or not, but it made me feel better. He was able to give the paramedics the rundown so that I didn’t have to. My eyes were closed almost the entire time because I was in so much pain. Ironically, because of the guy’s assessment, the paramedics thought that I was up and moving around, so they took their time getting there, not knowing that I was still lying in the middle of the street. I think a neighbor said it took like fourteen minutes.

When they finally got there, they took off my backpack and jacket, and I thought they were going to break me. They hoisted me into the ambulance, and I felt relieved. I also wondered how much this was going to cost me out of pocket. That was my first time in an ambulance, and, whoa baby, it was no fun. The trauma, plus the bumps and crazy driving, plus facing backwards, plus the pain of the accident really makes a person want to throw up, and I almost did until the paramedic gave me some blessed Zofran.

I finally made it to the hospital, and they did a bunch of x-rays and a CT scan (another first for me). They also had to take a scrub brush to my raw palms to get out all the debris. That was not lovely at all. I was there about six hours, and I probably cried about 50 times. I knew that it was just the trauma that was causing me to cry, so every time any hospital staff came into my room while I was crying, I would reassure them, “Don’t worry, come on in; I just keep crying. I can’t help it.” I was then given the all-clear to go home. All in all, I fared extremely well (especially for how badly I felt).
The result of my foolish blunder could have been so much worse, and I am very well aware and grateful. I left with:

- raw palms, which healed quickly because of the painful cleaning
- hands bruised all the way through for about two weeks
- a chipped tooth and a hole in my lip
- a busted chin that was glued shut
- torn cartilage in my left wrist, for which I will undergo surgery in August
- a chipped bone fragment in two places in my right pinky finger and a damaged ligament, for which I am in physical therapy

![Bike Accident Progression](/article_images/2018-07-19-bike-accident/progression.jpg)

Not bad! Inconvenient, sure, but you know what’s more inconvenient? Dying. Dying is definitely more inconvenient, so I will take these minor injuries over dying any day.

I hadn’t eaten or even drunk water all day (because of the possibility of surgery), so I was ready to get out of there. While I was at the hospital, my sweet husband arranged for me to fly home that evening because the doctor told me that the next day would be the hardest, and while my Airbnb hosts were nice, they didn’t sign up to take care of me, so I welcomed the rough flight home. The problem was that I needed help getting packed up but didn’t know anyone in town, and my hosts were at work, so Michael called my realtor’s assistant, and she came and picked me up at the hospital, took me to get some food (an organic cola product for my caffeine headache and sushi rolls, which I had to swallow whole), and packed up my suitcase at the Airbnb. Then I Lyfted to the airport and came home to recover.

I’m wondering at this point if you’ve guessed what the parallels to DevOps are or if you’re waiting for me to pull some out of my hat because you think I’m crazy. Well, here they are:

1.
_When we trade velocity for common sense, it will not always work out for us._ Sometimes we just aren’t careful and want to move too quickly, and we _know_ what we should be doing and even tell ourselves that we’re going to do it, but we chase after that goal with velocity (co-working place / better application) at the expense of safety, and we end up crashing anyway. I have had a client like that in the past. They wanted so badly to move quickly with their software product and release often, but they weren’t careful (didn’t have proper pipeline testing, isolation, versioning, etc.), and in the long run it cost them more time.

2. _Cleaning promotes healing._ How many times have I seen companies leave an application/pipeline/whatever in poor health for the sake of moving forward with their plans? How can you keep moving forward if you’re hurting? Take the time and energy and endure the pain of clean-up, and you’ll be _more_ effective in the long run.

3. _Sometimes expensive stuff really does keep you safe._ I was wearing the most expensive article of clothing that I own—my Joie leather jacket (go ahead and Google it and judge me). And it was pretty shredded, but I took it to the leather repair shop, and they fixed it completely! You can’t even tell it was in an accident. If that had been some cheap $50 jacket, it would have been ruined. And sometimes we want the cheap or free tool to save us, but if we put some actual money into a tool that is proven, then it will likely come through for us.

4. _Being safe takes practice._ I had gotten out of practice on my bike and suffered the consequences. There’s a guy, [Nick](https://nickhudacin.wordpress.com/2018/05/22/when-its-hard-do-it-more/), at my current client’s office who has a good practice of repetition. When he and his team do something for the first time, after it works perfectly, they tear it down and rebuild it another few times just for practice.
Another guy recently was complaining about it taking so long to get started with Test Kitchen, and, yes, that’s annoying, but it’s only like that at first. After practice, it becomes second nature, like any exercise.

5. _Vulnerability is good from time to time._ Noob mistakes are really embarrassing and humbling, but if you allow yourself to be vulnerable in the midst of that failure, it will likely bring out empathy and compassion in your fellow humans. Everyone knows what it’s like to hurt. Likewise, everyone knows what it’s like to fail at something in technology. If you allow yourself to be rescued when it’s absolutely necessary, then you will contribute to building a culture of vulnerability and trust in your organization. If you want to grow in that, I suggest listening to this [talk by Sameer Doshi](https://www.youtube.com/watch?v=wNLa8HSXGX0).

6. _The stress of our self-imposed deadlines costs us life energy._ To those of you fellow over-achievers: slow down and enjoy the ride. If you don’t, you could be in for a wipe-out, burn-out, fly-over-the-handlebars kind of year, and it will cost you all the time you thought you were saving.

7. _Colleagues that debug together, stay together._ Even if you don’t know what you’re talking about, sometimes it really helps a person feel better to have someone assess a situation with them. I have no idea if that guy knew what he was talking about or not, but it made me feel so much better that someone cared whether I was paralyzed. Yeah, yeah, yeah, the [rubber duck theory](https://en.wikipedia.org/wiki/Rubber_duck_debugging), but _every now and then_, when someone’s really freaking out, just be the rubber duck. It really helps.

8. _Drop everything when there’s an incident._ My husband has a pretty intense and demanding job, plus he was solo parenting while I was out of town. But when he got the call, he sprang into action.
Everything else was put on hold while he handled the logistics of getting me home and taken care of. I’ve seen people ignore IT incidents (or even just broken stuff) because maybe it didn’t affect them as much, or maybe they wanted to finish the task they were on before they got distracted onto something else, or whatever the excuse. But it benefits _everyone_ when the whole team is healthy, functional, and unblocked, so even if it’s not _your_ problem, get your ass in gear and help people when there’s an incident, or unblock people, or fix something broken when you discover it.

## Concluding Thoughts

We can get in the thick of it at work sometimes and forget to use common wisdom in our everyday lives. There are lessons all around us if we’ll slow down enough to take heed of them. Velocity is great, but sometimes so is slowing down.

![Boulder Library Coffee Saying](/article_images/2018-07-19-bike-accident/boulder.jpg)

---

# The ‘I Was Told’ Trap
URL: https://hedge-ops.com/posts/i-was-told/

Explore the dangers of doing what you’re told in IT and how it affects productivity and responsibility. How to avoid this trap and take charge of your career.

A lot of times in IT I’ll hear the phrase, “I was told”. For example, “I was told we are going to use Chef.” This is the worst possible phrase you can use. It suffers from two major defects.

First, it uses the _passive voice_. I think the passive voice is the enemy of productive and clear communication. In this context, the passive voice shields the listener from the person who did the telling and subtly tells you not to question the statement. “I was told to use Chef.” Who did the telling? Your mom? Your five-year-old son? Your boss? _Their_ boss? The teller implies that it doesn’t matter; you should just accept the statement because, “I was told”!

But this isn’t the worst part of the statement. _I was told_ takes the teller completely out of the chain of responsibility for actions and results in their business.
If _I was told to do Chef_ and it doesn’t go well, then that’s not my fault at all. I can go about my career with great pride in the fact that I’m only doing what I’m told and can chalk up failures to _management_ or _the crazy Chef evangelist_. _I was told_ needs to die. It’s the death knell of a career. It lulls its teller into a rejection of responsibility, and their skills fade away into oblivion. Here’s a secret: even though I wrote that on the Internet for the _whole world to hear_, you’re going to go to work tomorrow and hear _I was told_. It will be like the car you buy and then see everywhere. Everyone using the passive voice! Obfuscating the actions of others into an unquestionable force that we should all follow! And as that happens, I have some advice for you. Take charge. See what needs to change and change it. Call out a problem, suggest a solution, and solve it. Don’t complain about funding. Don’t complain about the other team that is stupid. Don’t complain about management, and _I was told_ yourself into oblivion. Stand up, take action, make the world a better place. --- # The Power of Context URL: https://hedge-ops.com/posts/power-of-context/ Explore how deep context in technology careers, particularly for those transitioning from other fields, allows for a solid foundation of technical skills. I’ve coached a couple of people lately who have made a career change into technology from other disciplines and have noticed a pattern emerge that I want to share more broadly. People from other disciplines bring diversity to their roles. In this context, we’re not talking about their gender or ethnicity. We’re talking about their _background_. From a purely business perspective, we want higher diversity and inclusion to remove the efficiency barriers created by the echo chamber of having only one group present.
If a disproportionate number of women were in technology, that same echo chamber would exist, and we would want to hire more men. Same with all other groups. With that in mind, people we hire onto our teams who are in underrepresented groups or who lack the traditional indoctrination into the culture of IT through universities (i.e., are making a career change) have special powers. They can see things the rest of us can’t. They work around the echo chamber and can truly be powerful. Because these people generally lack deep technical skills, they experience immense temptation to get outside a technical role and into a _better fit_. After all, if a new DevOps Engineer is growing and only OK at coding Ansible scripts, but is removing barriers left and right on getting a critical project delivered, why not let her focus on the latter? This would be a huge mistake. Don’t do it. Stay technical. Here’s the secret: when people have the technical _context_ within which they can solve human and technical problems together, their value shoots through the roof. If you walk away from opportunities to deepen your technical skills in an effort to maximize your value, you’re making a huge, short-sighted mistake. Without those technical skills, you won’t be able to lead as effectively in the future. You’ll be a _passenger_ in the software value creation process. Passengers in this process don’t get compensation, respect, and rewards equal to the _drivers_, or producers (unless they are in sales, which is another kind of producer). Let me leave you with an example, used by permission from my wife. Many people who know her, myself included, think that Annie will be a high-performing IT executive in five years. She’s got all the intangibles you need to be a VP Engineering. In order to get there, though, she needs to dive into her Chef projects for her customer, continue to build her technical skills by learning Ruby deeply, and continue to build her technical prowess.
Without that, she would probably stall out and lack the direct engagement that is needed for effective executive-level management. So if you’re in the middle of a career change or are supporting people who are, don’t take the bait of getting a non-technical role early on. Build on technical proficiency paired with the soft skills. Let them grow together. With that context, you’re truly on the path to long-term success. --- # Chef Certification Tests URL: https://hedge-ops.com/posts/chef-certification-tests/ Discover the journey to becoming a Certified Chef Developer. This demystifies the Chef Certification tests and provides useful study materials and tips. Last week I finished the last of the three required tests to become a _Certified Chef Developer_ (and passed - woohoo!). I took it kind of slowly, and just made it a quarterly objective to either study for or take a test over the past six months. It was a great experience, and I know that the tests are a bit mysterious, so I wanted to try and demystify it for you. (Their [FAQs](https://training.chef.io/certification-faq) are also helpful.) ![Chef Certification Map](/article_images/2018-04-07-certified-chef-developer/ChefCertification.png) My understanding of the badges is that you need three to get certified. The two required badges are _Basic Chef Fluency_ and _Local Cookbook Development_, and then you have a choice between _[Extending Chef](https://training.chef.io/extending-chef-badge)_ and _Deploying Cookbooks_. The [InSpec badge](https://training.chef.io/auditing-with-inspec-badge) is the wildcard. I don’t know what their plan is for that one yet, which is why I haven’t taken it. I will after I hear more about it. Back when I took this test I [told you](/posts/basic-chef-fluency) that I had created a GitHub [repo](https://github.com/anniehedgpeth/chef-certification-study-guides) to house my study materials. I continued to add to it as I studied for the exams. 
I found it really useful to go through the PDFs they created listing the topics covered in the exams and just write out as much info as I could about each topic. Searching through docs.chef.io was the best bet for these topics, not only because that’s considered the source of truth, but also because it familiarizes you with where to find everything in the docs. This is particularly useful in the exam when you are allowed access to docs.chef.io and need to use your time wisely. ![Chef Certification - Path](/article_images/2018-04-07-certified-chef-developer/paths.png) ## The badges I got 1. [Basic Chef Fluency](/posts/chef-certification-tests#basic-chef-fluency) 2. [Local Cookbook Development](/posts/chef-certification-tests#local-cookbook-development) 3. [Deploying Cookbooks](/posts/chef-certification-tests#deploying-cookbooks) Now, Chef has some excellent study materials. I don’t know why, but they just didn’t click for me. I would start on a [Chef Rally](https://learn.chef.io) lesson and then never finish. Maybe for me, they get a little too detailed about stuff I don’t work with often, and I get confused, then discouraged, then distracted. What does work for me is starting at a super elementary level and working my way up. [Chef Rally](https://learn.chef.io) does work for a lot of people, though, so give it a go and see if that’s you. It really is the best place to start because it’ll be the same wording and logic as the exams. [Linux Academy](https://linuxacademy.com/devops/training/course/name/certified-chef-developer-basic-chef-fluency-badge): They have classes for the first two badges; I didn’t have time to complete them, but their practice test and note cards were nice. Otherwise, I’ll share below what I did to prepare for each exam. Before I do that, I’ll share some notes about the exam experience that pertain to all three exams.
## Notes about my exam experience - There was not a visible timer, so if you don’t remember the exact time you started, then you have to chat with the proctor to ask how much time is left, which wastes time. - If your internet connection is interrupted (mine was, six times during one exam), they will give you the lost time back; however, this is not ideal because it makes you frantic. - You may not use the restroom or else you forfeit the remainder of your time (don’t ask how I know). - The setup may take a while since they have to scan your room and workspace, so plan accordingly. - The language of the questions is unnecessarily complicated. I had to reread some of them over and over only to find out it was a simple question with very confusing wording (exacerbated by nervousness). ![Basic Chef Fluency Badge](/article_images/2018-04-07-certified-chef-developer/badge-basic-chef-fluency.png) ## [Basic Chef Fluency](https://training.chef.io/static/Basic_Chef_Fluency_Badge_Scope.pdf) There are two basic components: a study sheet (cheat sheet) and a lab (kata). ### [Basic Chef Fluency Study Guide](https://github.com/anniehedgpeth/chef-certification-study-guides/tree/master/basic-chef-fluency) When studying for the [Basic Chef Fluency Badge exam](https://training.chef.io/basic-chef-fluency-badge), I studied this [guide](https://github.com/anniehedgpeth/chef-certification-study-guides/tree/master/basic-chef-fluency) daily until I was very comfortable going through the material. ### [Basic Chef Fluency Kata](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md) This is an [exercise guide](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md) meant for daily use. At the time I was studying for this exam, I was not using Chef in my daily practice, so I was a little rusty.
Doing this kata, I was able to get comfortable enough with Chef to navigate the topics of the exam. The idea is not to do the entire kata but to give yourself an allotted amount of time and start from the beginning each day. As you do this daily, you’ll get further and further because you’ll get faster and faster as you make those connections in your brain. ### [Basic Chef Fluency Kata Cheat Sheet](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata-cheatsheet.md) As I was going through the kata daily, I would make notes of what I did to solve each problem. That helped me to solidify it in my head and also gave me a quick reminder when I needed it. ![Local Cookbook Development - Badge](/article_images/2018-04-07-certified-chef-developer/badge-local-cookbook-development.png) ## [Local Cookbook Development](https://training.chef.io/static/Local_Cookbook_Development_Badge_Scope.pdf) You can expect this two-part exam to be tougher than the Basic Chef Fluency badge. It will be heavily focused on Test Kitchen, InSpec, and the basics of creating cookbooks. So if you have healthy test-driven development practices in your cookbook development, then you will likely do just fine. If you don’t, then this test will expose that. And guess what, I have another [study guide](https://github.com/anniehedgpeth/chef-certification-study-guides/tree/master/local-cookbook-development)! ### [Local Cookbook Development Study Guide](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/local-cookbook-development/local-cookbook-development-study-guide.md) The study guide works just like the one for the Basic Chef Fluency badge: study it daily and have someone quiz you. ### [Kata](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md) You’ll notice that for this badge I don’t have a new kata.
That’s because we just kept adding more to the [Basic Chef Fluency kata](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md) so that it really covers all three badges. If you want, instead of starting at the beginning, you can pick up at the [Chef Server](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md#chef-server) section and see how far you get daily (remember, the kata is meant to be started from the beginning each day, not picked up where you left off). ### Local Cookbook Development - Notes from Exam’s Lab You can choose whether to take this on Windows or Linux. The first time, I took it on Windows and failed. I was really shaky with: - How to install the `IIS-WebServerRole` Windows feature - How to create Windows registry keys (with `recursive true`) - How to create a file with content and create the recursive directories The second time, I took it on Linux and breezed through it. The Windows lab will require that you make 10 InSpec tests pass with the recipe. The requirements are very Windows-specific (see above for examples). ![Deploying Cookbooks](https://ik.imagekit.io/hedgeops/site/article_images/2018-04-07-certified-chef-developer/DeployingCookbooks.png) ## [Deploying Cookbooks](https://training.chef.io/static/Deploying_Cookbooks.pdf) You can expect this two-part exam to be tougher than the first two badges. This particular badge provides an alternative path to becoming a _Certified Chef Developer_. The alternative is taking the _[Extending Chef](https://training.chef.io/extending-chef-badge)_ exam, which focuses on extending Chef’s capabilities: creating Ohai plugins, custom resources, that type of stuff. _Deploying Cookbooks_, on the other hand, focuses primarily on what you do with the cookbook after it’s created, such as `knife` commands, Chef server administration, environments, roles, etc.
If you already have the _Basic Chef Fluency_ and _Local Cookbook Development_ badges, then passing this one makes you certified. Yay! ### [Deploying Cookbooks Study Guide](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/deploying-cookbooks/deploying-cookbooks-study-guide.md) This is the [guide](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/deploying-cookbooks/deploying-cookbooks-study-guide.md) for this exam. As you can see, there are a few topics that are not filled in, and those are the topics I probably got wrong. (Feel free to create a pull request to add any info!) ### [Kata](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md) If you’re rusty, I suggest going through the [kata](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md), starting at the [Chef Server](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md#chef-server) section, and seeing how far you get daily. ### Notes from Deploying Cookbooks Exam 1. Not being able to use dual screens was a nuisance during the lab. 2. The first seven or so questions were nothing I encounter on a regular basis - lots of `chef-solo`. I had no clue. It felt like a disproportionate number of questions that didn’t reflect real-life use cases (to me, at least). 3. Using [docs.chef.io](https://docs.chef.io) was super awesome - much like real life. 4. It was not clear to me that I could click on the link to go directly to the Chef Server UI. I thought I had to configure the connection to Chef Server manually via the command line, which I’d never done, so I spent a ton of time messing with that before realizing that I could go into the UI. 5. The lab was great once I figured that out. It was common, everyday stuff that everyone should know.
If I had it my way, I would have done more of those questions and fewer of the first 10 multiple-choice questions, which were not a reflection of my overall knowledge of Chef. ## “But I’m just not good at taking tests” That was me. I’ve never been good at multiple-choice tests because I take too long second-guessing myself. Essay exams were always a breeze for me because I could explain what I knew, but I always felt multiple-choice exams were too harsh. Well, my very wise husband suggested that perhaps I wasn’t good at taking tests because I had never properly learned how to take one, and that perhaps I could create a game plan for the test and see if I improved. Well, I did, and boy was he right. My anxiety went down, and I passed! Here’s what I did: 1. Opened up Atom and created a blank file to write notes. 2. Started the questions and went through the first 40, pacing myself at about 60 seconds per question (it’s a 90-minute exam). - Any time I got to a question I wasn’t super certain of, I’d type the question number in the blank Atom file to come back to it later. - If I knew exactly where the answer was in docs.chef.io and I knew I could find it quickly, then I would, but if it required digging, I typed the question number in Atom. 3. After the first pass at the multiple-choice questions, I started and finished the lab. 4. Then I returned to my Atom file and began looking up all the questions in docs.chef.io and answering them. ## Concluding Thoughts Honestly, this whole process has made me rethink my approach to studying and test-taking. I used to have huge test anxiety, but when I realized that it’s just a skill that can be developed like any other, I felt in control again, like I could move the needle instead of just feeling sorry for myself. Learning new skills is very empowering.
--- # 2017 Year in Review URL: https://hedge-ops.com/posts/2017-year-end/ The highlights and achievements of 2017 from career advancements to personal growth, leading to an exciting future in Boulder, Colorado. At the start of 2018, [Annie](/about/annie) and I feel like we’re at the start of a brand-new journey. We’ve decided to move to Boulder, Colorado in the coming months. Our jobs are going great, so we plan on working remotely with our companies after making the move. We’re both very excited about what’s in store; every time I go to the mountains I come alive in a way that no other place can match. Before moving on to this next chapter, I wanted to write about some of the great things that happened in 2017. On the automation front, we’ve settled on an automation stack that works for us using Chef, RunDeck, ARM Templates, HashiCorp Vault, Artifactory, and Salt. I abandoned my [experimental Cafe project](/posts/introducing-cafe) when I realized Chef was going in another direction. While that was painful to give up, I got to scratch a coding itch and realize that it was really up to me (and not a single vendor) to build the stack that solved our problems. After finalizing this approach to automation, I was able to build alignment with our central IT partners within NCR and make this our reference architecture going forward as we migrate some workloads to the cloud. It’s all very exciting and really a culmination of years of work and lots of great partnership, with Chef, Inc. especially. In July, I took on a new challenge at work to migrate our critical applications to a new hardware platform that would help us grow our business. It was a unique challenge for me because we wanted to move our critical applications to this platform before our peak season, which started in mid-November. For much of the second half of the year, I was obsessed with delivering this project. It had no _DevOps_ associated with it, or code, or any of that. However, I loved it.
I enjoyed rising to the challenge, bringing a lot of different departments together, and delivering the project. Our peak season has been one of the most stable and successful yet for the critical applications we migrated, and that’s thanks to our efforts. In August, we had a successful DevOps Days in Dallas. I was really proud of two things: first, we put together a leadership summit that attracted forty of Dallas’ DevOps-minded IT leaders. I really enjoyed being in that group and thinking about DevOps from a strategic/leadership perspective. I was also proud of how well my friend [Megan Bohl](https://twitter.com/MeganBohl) did as a sponsor liaison and that we were able to get her a scholarship to [Tech Talent South](https://www.techtalentsouth.com/). Stay tuned with Megan; she’s going to be a star. This past year I was also impressed with Annie’s growth. She started the year only a few months into the job, very much struggling to put it all together and get herself on billable jobs. At the beginning of the year, it felt like she was fighting an uphill battle for success in the industry. And along the way there were lots of people (don’t worry, we’re not mad at you) who suggested that Annie should be in a non-technical role. Instead of giving up, Annie dug in and studied hard. When the opportunities came, she worked extra and I watched the kids. She got help wherever she could get it. In May, she got a particularly challenging make-or-break project, and her colleague Scott Nowicki spent time after hours walking her through the challenges she was having so she could deliver. That was a huge turning point for her. Another huge turning point was when her CEO helped her realize that if she wanted to be technical, she needed to _focus_ on that and stop speaking, blogging, and marketing so much. That was exactly the advice she needed right then.
A lot of people would see her talents and say “you should be blogging!” but then ignore the technical skills that were growing so rapidly. Which reminds me of a third turning point for me personally. At DevOps Days, after our speaker dinner, Adam Jacob and John Willis heard our story, and they both remarked on how unusually talented Annie is as a technologist. I must admit I had become blind to that reality because I had spent the last year working with her at the edge of her abilities. But the truth was evident: it’s not normal to go from nothing to knowing InSpec well enough in two weeks to write tutorials on it. It’s not normal to accomplish what she has accomplished, because, frankly, she’s good at it. She’s abnormally good. She has the potential to be a technical game changer for a valuable business. That makes me so glad she’s stayed on the path she’s on as a technologist. Late in the year, Annie started working on a long-term contract with a company to help them with their Chef and Azure workloads. She’s integrated InSpec into their workflow and has done so much more than that. She loves finding the _right_ way that aligns with the business objectives and challenges. I can’t wait to see how much she’s able to accomplish there in the coming months. It has been an amazing journey to see Annie transform like this. Years ago I’d come home from work, and she would be taking wood from the side of the road and turning it into something beautiful and useful. It’s amazing to see her transform those skills and that work ethic into a fantastic technical engineering position. And this is only the beginning! Next year I’m going to focus on two things: First, I’m going to move to Boulder and find my place in that community. There are a lot of differences between Dallas and Boulder, and I’m hopeful that I can fit in. I’m there for more personal reasons: I want to bike everywhere, I want to live in a smaller place, and I want to hike up mountains.
I would also love to snow ski for the first time. Moving to Boulder really is a dream come true! Second, next year I’ll be focusing more on the broader elements of our DevOps transformation journey at NCR. I am looking forward to influencing our growth and transformation beyond just code and automation improvements. I want to dramatically improve what we do and what we can do. I’m more excited than ever about the new challenges before me. I don’t think there will ever be another 2017 for Annie and me. It was a year of dramatic growth for both of us. We started the year not quite sure about our futures, but finished it as valuable contributors at our companies, so happy about how we are both growing. My mantra has always been to go for growth first and everything else will follow. That’s where I feel we both are, and I’m thrilled to see where this takes us. And finally, thanks to everyone who has supported us along the way. We couldn’t have done what we did without you. You know who you are, and believe us, we are so incredibly grateful for your support, friendship, and love. --- # Your First DevOps Project to Automate Infrastructure URL: https://hedge-ops.com/posts/devops-project/ Start your journey into DevOps with this comprehensive guide. Learn to automate infrastructure by doing with this simple project. Many people I talk to with a history in system administration or engineering want to learn about DevOps but don’t know how to get started. This post lays out some steps to create a simple website using many of the same tools and workflows I’ve used over the years to get things done. If you’re able to get through these steps with the help of a mentor, you’ll be well on your way to becoming proficient in the world of DevOps. I recommend you follow the phases below in order. Don’t skip any steps. And if you get to a point where you’re thinking “I don’t know anything about git!” or whatever, have no fear.
Stop, spend some time learning git, and then keep going. You can do this project without any previous technology experience, assuming you have someone help you look in the right places. And that’s the final thing: use this as a guide to get you on your way, but find a technical friend to sponsor your learning and interact with you. That will be huge. We’ll deliver the project in phases: ## Phase 1: Simple Website In order to work well with DevOps automation and tooling, you need to know the basics about source control and editing code. This first phase gives you that foundation by creating a basic website on GitHub. 1. Create an account on [GitHub](https://github.com/) 2. Create a repository named `website` on GitHub 3. Clone your `website` repository locally using your terminal (`Terminal` on a Mac or `PowerShell` on Windows). 4. Create a branch called `feature_index` 5. Add a file `index.html` to the branch containing the text `Hello, World!`. You’ll want to make the change with [Visual Studio Code](https://code.visualstudio.com/). 6. Commit and push that change 7. Create a pull request for your branch to be merged into `master` and assign the pull request to your mentor. If you don’t have a mentor, assign it to me, `mhedgpeth`! 🙂 8. Merge your pull request into `master` after it’s approved ### Resources for Phase 1 - [Try Git in 15 minutes](https://try.github.io/levels/1/challenges/1) - [Getting Started with Visual Studio Code](https://code.visualstudio.com/docs) - [Codecademy Learn HTML](https://www.codecademy.com/learn/learn-html) ## Phase 2: Simple Webserver Now that you have a website, you want to create a server to host that website. This is a machine that will provide other machines with the contents of your `index.html` file from Phase 1. This machine will be a Linux machine hosted on [Microsoft Azure Cloud](https://portal.azure.com/) with [nginx](https://www.nginx.com/) running on it. 1.
Create an account on Azure and activate your [free trial](https://azure.microsoft.com/en-us/offers/ms-azr-0044p/) 2. Create an Ubuntu virtual machine on Azure 3. SSH to that machine. If you’re on Windows, use [Matt Wrock’s guide](http://www.hurryupandwait.io/blog/need-an-ssh-client-on-windows-dont-use-putty-or-cygwinuse-git) to getting an SSH client. 4. [Set up nginx on the machine](http://lmgtfy.com/?q=set+up+nginx+on+ubuntu) 5. Clone your git repository to `/var/www/html` on the server 6. Using the public IP assigned to your Ubuntu server, access the website (i.e. `http://[your-ip]`). Your website should show up. 7. Have a friend on another computer do this. They should see the website too. Magical! ### What We’re Learning My friend Nathan Harvey has said to me that you can’t automate that which you do not understand. If you’re going to _do the DevOps_ but can’t do it manually, then you’re going too fast! So we’re learning the basics here of setting up a machine. ### Resources for Phase 2 - [Setting Up Linux Virtual Machine on Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal) - [Nginx Tutorial](http://tutorials.jenkov.com/nginx/index.html) - [Nano Tutorial](https://www.howtogeek.com/howto/42980/the-beginners-guide-to-nano-the-linux-command-line-text-editor/) for editing text files in an SSH session ## Phase 3: Deploy a Change Now it’s time to deploy a change! Let’s do this the old-fashioned way the first time around. 1. Create a new branch on your repository named `feature_myname`. 2. Update `index.html` to say `Hello, Michael!` 3. Create a pull request, get it reviewed, and merge it 4. On your webserver, update your `index.html` file from GitHub. 5. Get on Twitter and send me a message @michaelhedgpeth with a link. Show me that I didn’t waste my time! 🙂 ### What We’re Learning This is how a majority of people have deployed things for a long time. The manual way.
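Step 4 above, done the old-fashioned way, might look something like this from your terminal (a sketch; the username and IP address are placeholders, so substitute the ones from your own Azure VM):

```shell
# SSH into your webserver (placeholder user and IP from Phase 2)
ssh azureuser@203.0.113.10

# On the server, pull the merged change into the web root you cloned earlier
cd /var/www/html
sudo git pull origin master
```

Refresh `http://[your-ip]` in your browser and the new greeting should appear. Every manual hop here is something you’ll automate in the later phases.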
You can’t realize what problems different tools solve until you do it manually. I’m purposely not giving you the exact steps to follow because I want you to find those steps on your own or with a mentor. Google it! This is how you learn. ## Phase 4: Create a Chef Cookbook to Automate Machine Setup Now think about what your life would be like if you had to do the above steps thousands of times on hundreds of servers. Your life would not be fun, and on top of that you would have hundreds of servers that were all a little bit different because, over time, you weren’t consistent. [This is why you use something like Chef](/posts/chef-community). This is a big technical jump from the previous phases. Take your time here. If you’ve never done this stuff you might spend a couple of weeks on it. You’ll have questions. Get on the [Chef Community Slack](http://chefcommunity.slack.com/) and post them on the #general channel. Don’t be afraid; we’ve all been there and will support you! 1. On GitHub create a repository called `my_website`. 2. Clone that repository locally and switch to a branch called `feature_nginx` 3. Set up `nginx` using the [package](https://docs.chef.io/resource_package.html) resource 4. Use [Test Kitchen](https://kitchen.ci/) to make sure your cookbook installs the package. Use Kitchen with a [Policyfile](/posts/policyfiles). There will be a `Policyfile.rb` in the cookbook that defines your run list if you’re doing this right. 5. Write an [InSpec test](/inspec) that ensures the package is installed 6. Create a PR and get it merged 7. Create another branch called `feature_website` 8. Using the [git resource](https://docs.chef.io/resource_git.html), clone your repository from GitHub 9. Write another InSpec test to make sure that the website is served when you go to `http://localhost` and that the contents contain `Hello, Michael!` 10. Make sure that `kitchen test` works 11.
Create a pull request and merge into master ### Resources for Phase 4 - [Learn Chef Rally](https://learn.chef.io/#/). If you’re new to Chef, spend a lot of time here. Like days. Get through the basics, and what I write above will make a lot more sense. - [My Policyfiles Post](/posts/policyfiles). My description of that feature. Do yourself a favor and use it. It will simplify your life. - [Annie’s InSpec Tutorials](/inspec). Still the best place on the internet to learn InSpec. ## Phase 5: Deploy Your Chef Cookbook to Azure Now it’s time to use your Chef cookbook as a way to not have to manually deploy and update your machine. After you do this step, you’ll be able to consistently create as many machines as you want, with very little effort! 1. Create a new virtual machine on Azure using the portal 2. Create an account and organization on [manage.chef.io](http://manage.chef.io/). If you did Learn Chef Rally above, you should already have this set up. 3. Push your cookbook policy to the Chef server (see my [blog post](/posts/policyfiles) for the exact command). 4. [Bootstrap your node](https://docs.chef.io/install_bootstrap.html) with the Chef Server. This will run the policy on that node, setting up nginx and everything! 5. Hit the public IP associated with the node and see that it serves your website perfectly! Magic! 6. Now deploy a new change to your website that says `Hello, Automated World!`. Tell me about it (@michaelhedgpeth on Twitter). I’ll give you a high five for getting this far! ### What We’re Learning Hopefully, you see the benefits of automating the setup of the machine. From now on, everything is consistent and just works. The drama of making changes goes down. And you can do this as many times as it’s called for! ## Phase 6: Automate the Creation of VMs in Azure with Terraform Now that we have a virtual machine that we can easily set up, you might be tempted to quit.
But there’s something lingering there: even though you automated what was _inside_ the virtual machine, you still have to manually set up the machine itself. This is easy enough when you have one machine, but what if you have hundreds? That’s where Terraform comes in: 1. Create a new repository in GitHub called `website_provision` 2. In that repo, create a Terraform script that creates a virtual machine within a network with an external IP address 3. Make your Terraform script bootstrap the VM with your Chef Server and assign it to your policy 4. Watch with awe how Terraform allows you to create and destroy your _entire_ stack, all the way from the machines themselves to what’s on the machines (with Chef). ### What We’re Learning The environment within which your application runs is one of the most complicated aspects of the application. Thus, you should invest in automating it and keeping yourself away from the user interface. Terraform is a great tool for this. ### Phase 6 Resources - [Terraform Resources](https://www.terraform.io/intro/index.html) - [Azure Terraform Provider](https://www.terraform.io/docs/providers/azurerm/) - [Azure Terraform Examples](https://github.com/terraform-providers/terraform-provider-azurerm/tree/master/examples) ## Phase 7: Workflow Automation with GitHub Actions Now that we have an automated process that will deploy our stuff, we want to make our workflow easy to execute with GitHub Actions. 1. Create a Rakefile for your `my_website` cookbook using [my blog post](/posts/cookbook-development-with-rakefile) as a guide. 2. Add a GitHub Actions build script 3. Create a branch called `broken` and add code that will break your cookbook 4. Check it into the branch, and watch GitHub Actions tell you it’s broken! ### What We’re Learning We’re automatically providing the accountability people need to know that their software still works. It’s better to learn that _as_ you’re making the changes rather than when they get to production. 
So GitHub Actions helps us see how, when we change software over time, that software still meets our expectations. It reinforces the best practices for your team: you should run Test Kitchen before checking in changes. The safety imposed by GitHub Actions will keep you on the straight and narrow path! ### Resources for Learning - [GitHub Actions for Chef Cookbooks](https://dev.to/chefgs/writing-a-github-actions-workflow-for-chef-cookbook-cf1) ## Extra Credit I think if you do the above project, you’ll be well on your way to getting things working. Here are some other ideas for extra credit: - Using [HashiCorp Vault](https://www.vaultproject.io/), store a secret and have your Chef cookbook (using the vault gem) read the secret and write it to your web page - Create a GitHub Actions pipeline that will deploy your Terraform templates - Instead of using Chef, use Docker to deploy your application ## Conclusion This project will give you the context to understand the basics of automating infrastructure. With this foundation under your belt, you’ll be able to make great progress in whatever situation you find yourself in. --- # VM from Custom Image with Terraform and Azure URL: https://hedge-ops.com/posts/azure-vm-from-custom-image-in-terraform/ Learn how to create virtual machines from custom images using Terraform and Azure with this step-by-step guide. On Monday, I gave you some basic tips about working with [Terraform in Azure](/posts/terraform-and-azure), and today I want to show you what I’ve learned about creating virtual machines from custom images. First of all, there are a lot of ways in which you can create your image, [Packer](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/build-image-with-packer) being a great option, but I’m just going to show you the simple, manual way because I think it gives you a good idea of what’s happening. Then we’re going to build a virtual machine in Terraform from that image. ## The basic outline 1. 
[Create Source VM](/posts/azure-vm-from-custom-image-in-terraform#1-create-your-source-virtual-machine) 2. [Deprovision / Sysprep](/posts/azure-vm-from-custom-image-in-terraform#2-deprovision-or-sysprep-your-source-virtual-machine) 3. [Deallocate](/posts/azure-vm-from-custom-image-in-terraform#3-deallocate-your-source-virtual-machine) 4. [Generalize](/posts/azure-vm-from-custom-image-in-terraform#4-generalize-your-source-virtual-machine) 5. [Create Image](/posts/azure-vm-from-custom-image-in-terraform#5-create-your-image) 6. [Create Virtual Machine with Terraform](/posts/azure-vm-from-custom-image-in-terraform#6-terraform-it-up) ### 1. [Create your Source Virtual Machine](https://docs.microsoft.com/en-us/cli/azure/vm#create) This is totally up to you. Provision this bad boy however you want. Just know that you’re not going to actually use this machine; you’ll only use it to make an image. Also, this post is about creating a VM with unmanaged disks, so if you want to follow along, you’ll need to specify `--use-unmanaged-disk` in the `az vm create` command or in the Azure Portal. The default is to use managed disks, so be aware. Make sure you note what your `osdisk name` is. You’ll need it later. As of today, there is an [issue](https://github.com/hashicorp/terraform/issues/13932) with creating a VM from a user image with managed disks. I hope to write another post telling you how to do that once the issue is resolved. ### 2. Deprovision or Sysprep your Source Virtual Machine Whether you’re making a [Linux](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image) or [Windows](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/capture-image) image, the steps are generally the same. For Linux, you’ll _deprovision_ your machine, and for Windows, you’ll _Sysprep_ it. 
_Linux:_ There’s an [Azure agent](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/agent-user-guide) on your Linux machine called `waagent`, and we’re going to use it to deprovision that machine. Deprovisioning means that we’re going to use that agent to delete files and data. To deprovision your _Linux_ machine, SSH to your machine. When you’re in, simply run `sudo waagent -deprovision+user`. If you don’t want to type `y`, giving it permission to continue, then you can add `-force` to avoid the confirmation step. After that, `exit` to exit your SSH session. A little note from Microsoft documentation: > Only run this command on a VM that you intend to capture as an image. It does not guarantee that the > image is cleared of all sensitive information or is suitable for redistribution. The +user parameter also removes the > last provisioned user account. If you are baking account credentials in to the VM, use -deprovision to > leave the user account in place. _Windows:_ [Sysprep](https://technet.microsoft.com/library/bb457073.aspx) gets a machine ready to be used as an image by deleting personal account information, among other things. To Sysprep your Windows machine, sign in to your Windows VM. Navigate to `%windir%\system32\sysprep` (whether in the Command Prompt or just in Explorer) and run `sysprep.exe`. When the _System Preparation Tool_ dialog box pops up, select _Enter System Out-of-Box Experience (OOBE)_, and make sure that the _Generalize_ check box is selected. _Shutdown Options_ should be _Shutdown_ because we want it to shut down when it’s finished sysprepping. ![System Prep Dialog](/article_images/2017-06-21-azure-vm-from-custom-image-in-terraform/sysprepgeneral.png) ### 3. Deallocate your Source Virtual Machine Now we have to deallocate that machine. This means that we’re not only stopping the machine, but also deleting its public and internal IPs. When a machine is deallocated, it no longer incurs charges. 
To do this, you need to be _logged into your Azure account_ (`az login`). After that, you can run:

```shell
az vm deallocate --resource-group <ResourceGroupName> --name <VMName>
```

### 4. Generalize your Source Virtual Machine

Once your machine is deallocated, it’s ready to be generalized, the final step before creating your image. (If you’ve created your source virtual machine with Packer, then it has already generalized your machine, so this step is unnecessary.)

```shell
az vm generalize --resource-group <ResourceGroupName> --name <VMName>
```

### 5. Create your Image

At last, we’re ready to create your image from which you’ll clone machines. Go ahead and run (please note that _name_ now refers to the image and not the VM):

```shell
az image create --resource-group <ResourceGroupName> --name <ImageName> --source <VMName>
```

A good note from [Microsoft](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image): > The image is created in the same resource group as your source VM. You can create VMs in any > resource group within your subscription from this image. From a management perspective, you may wish to create a > specific resource group for your VM resources and images. ### 6. Terraform it up Now for the fun stuff! Okay, so we have our image sitting there in our resource group, and now we have a couple of options. If we want to use [managed disks](https://azure.microsoft.com/en-us/services/managed-disks/?v=17.23h) (after the [issue](https://github.com/hashicorp/terraform/issues/13932) is resolved), then we can use an image from one resource group and create a VM in another resource group (but still in the same subscription). For this example, as I said, though, I’m going to use unmanaged disks. [This example](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/vm-from-user-image) is nice and easy to walk through because it does exactly what we’re wanting to do. 
Let’s take a look at the `azurerm_virtual_machine` block of the [`main.tf`](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/vm-from-user-image/main.tf#L48).

```hcl
resource "azurerm_virtual_machine" "vm" {
  name                  = "${var.hostname}"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  vm_size               = "${var.vm_size}"
  network_interface_ids = ["${azurerm_network_interface.nic.id}"]

  storage_os_disk {
    name          = "${var.hostname}-osdisk1"
    image_uri     = "${var.image_uri}"
    vhd_uri       = "https://${var.storage_account_name}.blob.core.windows.net/vhds/${var.hostname}-osdisk.vhd"
    os_type       = "${var.os_type}"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "${var.hostname}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}
```

First of all, if you’re comparing this VM block to building a [virtual machine](https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html) _not_ from an image, you’ll notice that we’re missing a `storage_image_reference` block. It’s omitted because the image provides that information. On the other hand, `vm_size` is required, and it must be the same size as the image. In the `storage_os_disk` block, the two things we’ll look at are the `image_uri` and the `vhd_uri`. #### `image_uri` The image that you just made has a VHD, and you need the URI of that VHD. There are a couple of ways to find the `image_uri`. First, you can simply look in the portal. You can see in the screenshot below that I have a resource group called `permanent` with an image called `customImage`. In the overview of the image resource, I can see the `Source Blob Uri` that I need. 
![Azure Portal](/article_images/2017-06-21-azure-vm-from-custom-image-in-terraform/portal.png) Another way I can find that is to use Azure CLI 2.0 to find out the names of my resource group (`az group list`), storage account (`az resource list -g [ResourceGroupName] -o table`), and OS disk (if you’re not using managed disks, I don’t know of a command to find this name, so I hope you saved it from when you created the VM; create an [issue](https://github.com/anniehedgpeth/anniehedgpeth.github.io/issues) if you know of one). If I have those things, then I can build the URI like this:

```text
https://<storageAccountName>.blob.core.windows.net/vhds/<osDiskName>.vhd
```

The only change would be if you changed the default name of the `vhds` directory. Otherwise, this should work. #### `vhd_uri` Since this example does not have us using managed disks, we’re going to have to put our new VHD into our existing storage account. Therefore, the `storage_account_name` variable that you see there is for the _existing_ storage account in which your image’s VHD resides (the one we used for the `image_uri`). And that’s it! If you want to create a VM with managed disks, it’s not too different, but I’ll show you after that issue gets resolved. You can also check out these other examples of creating VMs from images: - [VM on a New Storage Account from a Custom Image](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/vm-custom-image-new-storage-account) - [Simple Linux with Managed Disks](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/vm-simple-linux-managed-disk) ## Concluding Thoughts Having some foresight into this entire process before you get started will help you along the way in creating virtual machines from an image. It will cause you to carefully consider disk type, where to store everything, resource group structure, etc., as opposed to flying by the seat of your pants (which will most likely result in you starting over; trust me). 
Having this high-level view of the process will really simplify it for you. So I hope this helps! Happy Terraforming! > In case you missed it…here’s my post on tips for working > with [Terraform in Azure](/posts/terraform-and-azure). --- # Terraform and Azure URL: https://hedge-ops.com/posts/terraform-and-azure/ Explore the powerful combination of Terraform and Azure in this comprehensive guide. How to create, change, and improve production infrastructure using Terraform’s open-source tool with Azure. I’ve been really getting into [Terraform](https://www.terraform.io) lately and have been interested to see how well it plays with [Azure](https://www.terraform.io/docs/providers/azurerm/). I have to say, I’m pretty impressed. In fact, I’ve had a lot of fun with it. If you’re not familiar with Terraform, in their words: > Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open > source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated > as code, edited, reviewed, and versioned. First of all, did you know that Azure has a ton of example templates in [Terraform’s GitHub repo](https://github.com/terraform-providers/terraform-provider-azurerm/tree/master/examples)? This is a great starting point if you’ve never used Terraform before. The templates range in complexity, from a simple Linux virtual machine all the way up to creating an entire OpenShift Origin deployment. You could play around with the deployment of those templates into your Azure account and get pretty familiar with it. I do have a few tips if you’re just getting started with Terraform and Azure: ## 1) Naming Conventions It super sucks when you’re waiting on a really long build only for it to return with an error that your container name can’t have an underscore in it. 
That will only happen once before you find this page on [Azure Naming Conventions](https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions). You’re welcome. Also, be aware of password restrictions! Certain resources require more complex passwords, so be aware that a weak password could fail your build. ## 2) Nesting Resources Sometimes you can nest resources, such as the subnet within the vnet resource block, like this:

```hcl
resource "azurerm_virtual_network" "test" {
  name                = "virtualNetwork1"
  resource_group_name = "${azurerm_resource_group.test.name}"
  address_space       = ["10.0.0.0/16"]
  location            = "West US"
  dns_servers         = ["10.0.0.4", "10.0.0.5"]

  subnet {
    name           = "subnet1"
    address_prefix = "10.0.1.0/24"
  }

  subnet {
    name           = "subnet2"
    address_prefix = "10.0.2.0/24"
  }
}
```

This is nice, and you feel like you’re cheating, but there are a lot of times that you have to reference the subnet ID elsewhere, like here in the NIC’s IP config. You can’t do that if the subnet is nested, so you’ll have to have two separate blocks: one for the VNET and one for the subnet.

```hcl
resource "azurerm_network_interface" "nic" {
  name                = "nic${count.index}"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  count               = 2

  ip_configuration {
    name                                    = "ipconfig${count.index}"
    subnet_id                               = "${azurerm_subnet.subnet.id}"
    private_ip_address_allocation           = "Dynamic"
    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.backend_pool.id}"]
    load_balancer_inbound_nat_rules_ids     = ["${element(azurerm_lb_nat_rule.tcp.*.id, count.index)}"]
  }
}
```

## 3) Graphs I’m a fan of the graphs. I think they can be helpful and even point out errors in the logic of your architecture. If you would like to create a graph of the template that you created, you may run this command from within your template’s directory:

```shell
terraform graph | dot -Tpng > graph.png
```

And you’ll end up with something like this inside that directory. Kinda fun, right? 
![terraform graph](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/vm-custom-image-new-storage-account/graph.png?raw=true) ## 4) [Validate](https://www.terraform.io/docs/commands/validate.html) and [Format](https://www.terraform.io/docs/commands/fmt.html) You write Terraform in HCL (HashiCorp Configuration Language), and a cool little trick to validate that you’ve written your code properly is to run `terraform validate` in your directory; it’ll let you know if you’ve got any errors. Also, if you don’t want to waste your time getting all the spacing pretty and perfect, you can just run `terraform fmt` in that same directory, and it will clean up all of your spacing. ## 5) [Plan](https://www.terraform.io/docs/commands/plan.html) You’ve gotten your script just the way you want it, and now you want to see if it will work, but you don’t want to run the whole thing. Great! Run `terraform plan`, and you’ll get a lovely, formatted list of all the resources that you _plan_ to create. And if anything is incorrect in your script, then it will tell you that it doesn’t work and why. Always run a plan before applying! On a related note, there’s a lot of debate about how Terraform handles [state](https://www.terraform.io/docs/state/index.html), too, but I’m going to tackle that in another post. ## 6) [Virtual Machine Extensions](https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html) So I’ve always heard (and agree with) the sentiment that you should only use Terraform for provisioning infrastructure and leave the configuration of all of those things to the tools that are good at doing configuration (i.e. Chef or Ansible). And I’ve run enough shell scripts with Terraform to know that it can be a big pain. There are just countless things that can go wrong and waste a whole bunch of your time troubleshooting them (mostly access issues, IMHO). 
So normally you would use one of the [provisioners](https://www.terraform.io/docs/provisioners/index.html) such as `remote-exec` or `local-exec`, with a `connection` using a `bastion_host`. And sometimes this works wonderfully, and other times you run into a myriad of issues concerning privileges or SSH or something. But if access issues cause the majority of those issues, giving Terraform a bad reputation for configuring infrastructure, then what if I told you that there’s something that makes configuring Azure infrastructure with Terraform easier? I give you [Virtual Machine Extensions](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/extensions-features). In Microsoft’s words: > Azure virtual machine extensions are small applications that provide post-deployment configuration and automation > tasks on Azure virtual machines. For example, if a virtual machine requires software installation, antivirus > protection, or Docker configuration, a VM extension can be used to complete these tasks. Azure VM extensions can be run by using the Azure CLI, PowerShell, Azure Resource Manager templates, and the Azure portal. Extensions can be bundled with a new virtual machine deployment or run against any existing system. They left [Terraform](https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html) off of that list, but I’m here to tell you that you can use it with Terraform, too! That means that with this resource:

```hcl
resource "azurerm_virtual_machine_extension" "test" {
  name                 = "hostname"
  location             = "West US"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_machine_name = "${azurerm_virtual_machine.test.name}"
  publisher            = "Microsoft.OSTCExtensions"
  type                 = "CustomScriptForLinux"
  type_handler_version = "1.2"

  settings = <<SETTINGS
    {
        "commandToExecute": "hostname"
    }
SETTINGS
}
```

you can run a script on your virtual machine right from your Terraform run. ## 7) [ARM Template Deployment](https://www.terraform.io/docs/providers/azurerm/r/template_deployment.html) Now hear me out! You’re used to the argument being Terraform or ARM, am I right? 
And the staunch ARM supporters tout that Azure’s API will always be better than Terraform’s, so if there’s a resource that they need, they don’t want to wait around for Terraform to create it, yada yada. I get it. But what if I told you that you could run an ARM template straight _from_ Terraform? Think of the leverage that would bring you! There are a LOT of ARM templates out there that you can leverage, and wouldn’t it be nice if you could just drop one straight into your Terraform script (like this [example](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/encrypt-running-linux-vm/main.tf#L77)) using all of your variables? Bam! Instant access to ARM. ## Concluding Thoughts I still agree with the sentiment that Terraform should do what it does best (standing up infrastructure), that you should use the correct tool for the job, and that Terraform isn’t the correct tool for configuration. In a lot of situations, though, letting Azure do the heavy lifting is a very valid option (re: extensions and ARM deployment). I’ve recently seen the news that HashiCorp has decided to split up each of their providers into individually distributed provider plugins because of the explosive growth of providers. That means that instead of shipping all the providers as part of the main Terraform binary, each provider will have its own plugin and therefore its own GitHub repo, like this one for [AzureRM](https://github.com/terraform-providers/terraform-provider-azurerm). I think this is great news because it means faster turnaround with bug fixes, features, etc. So stay tuned in to HashiCorp for news of the release of Terraform 0.10. I think it will mean even better things for Azure and Terraform’s synergy. --- # Study Guide for the Basic Chef Fluency Badge Exam URL: https://hedge-ops.com/posts/basic-chef-fluency/ Prepare for the Basic Chef Fluency Badge Exam with our comprehensive study guide. 
Gain insights from my personal experience and access a community-driven study resource. Get ready to ace the test! I took the Basic Chef Fluency test (the first one toward Chef certification) a few months ago now, so I wanted to share what I thought about it, along with a study guide that a lot of people have asked about. ![Basic Chef Fluency Badge](/article_images/2017-06-16-basic-chef-fluency-badge/badge-basic-chef-fluency.jpg) So first off, you pay for it, then schedule it, and you can take it remotely. You should sign in a little early to get set up. You’ll have access to a proctor with whom you communicate through instant message. That person will screen share with you, ensuring that all other windows are closed. They’ll also have a stream from your camera and will have you do a scan of your room, so make sure you’re in a room with no notes around (i.e. I removed my bulletin board from the wall). Another note: my screen froze up, and I panicked a little, but it unfroze and no time was given back to me. Just be aware that could happen. It sucks, but whatever. The test was pretty easy if you’re familiar with the stuff on the study guide. Obviously, right? So if you’re not familiar with the stuff in this study guide, then don’t take the test until you are. There’s not a lab portion of this test, just 40 questions and 60 minutes to answer them. The types of questions that I remember most were Kitchen and InSpec, so brush up if you need to. While there are a lot of courses out there, there’s little in the way of a simple study guide. Chef has a PDF of the things that you need to know for the exam, and while studying, I wrote out bullets for each of those points. And now I’m sharing them with you! I’ve created this [Basic Chef Fluency Guide](https://github.com/anniehedgpeth/chef-certification-study-guides/tree/master/basic-chef-fluency) repo, and it’s meant for community consumption, so please…consume! 
The [`README.md`](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/README.md) has instructions on how to use the guide, but it’s pretty simple. There are three files in the guide: ## [Basic Chef Fluency Study Sheet](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-study-sheet.md) This is meant for studying the concepts outlined in the [scope PDF](https://training.chef.io/static/Basic_Chef_Fluency_Badge_Scope.pdf) by Chef. You can even print it out and mark everything you’re not familiar with so that you can focus on just those things. ## [Basic Chef Fluency Kata](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-kata.md) This is an exercise guide meant for daily use. This guide is most effective for those who are not currently using Chef in their daily practice or are just starting with Chef. It will give you enough comfort with Chef to navigate the topics of the exam. In your kata, you will practice: - creating different types of resources - creating and using data bags - using [Test Kitchen](http://kitchen.ci/) - creating custom resources - bootstrapping nodes - performing searches - creating run-lists, roles, and environments - and more! ## [Basic Chef Fluency Kata Cheat Sheet](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/basic-chef-fluency/basic-chef-fluency-study-sheet.md) This is used in conjunction with the Basic Chef Fluency Kata. If you are uncertain about how to carry out an exercise, then you can consult the cheat sheet. As I did the kata, I took notes on anything that stumped me or that I didn’t want to have to look up every time. 
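To give you a taste of the kata, the earliest exercises have you writing small recipes. A minimal sketch touching a few common resource types might look like this (the package, template, and service names are illustrative, not taken from the kata itself):

```ruby
# Hypothetical kata rep: install a web server, manage its homepage, start it.
package 'httpd'

template '/var/www/html/index.html' do
  source 'index.html.erb'
  mode '0644'
end

service 'httpd' do
  action [:enable, :start]
end
```

Running a rep like this daily in Test Kitchen builds exactly the kind of muscle memory the exam questions assume.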
If you have improvements that can be made to this guide, please submit a [pull request](https://github.com/anniehedgpeth/chef-certification-study-guides/pulls) or [issue](https://github.com/anniehedgpeth/chef-certification-study-guides/issues)! Also, be on the lookout for more guides for the other tests! [Local Cookbook Development](https://training.chef.io/local-cookbook-development-badge) is coming up next! Good luck on your [exam](https://training.chef.io/basic-chef-fluency-badge)! _Edited to add_: [Local Cookbook Development](https://github.com/anniehedgpeth/chef-certification-study-guides/blob/master/local-cookbook-development/local-cookbook-development-study-guide.md) is up!!!! It doesn’t contain a kata, so I’m not doing a full write-up about it—just a straight study sheet. Enjoy! --- # Chef is a Community Before It’s a Vendor URL: https://hedge-ops.com/posts/chef-community/ Explore how Chef Software Inc. values its community, encouraging collaboration and innovation. Learn how this open-source community’s approach led to the adoption of a new feature. Back in October, I was frustrated. I had invested deeply in a new feature Chef had made called [Policyfiles](/posts/policyfiles) and had watched it fail to gain adoption. I met in an intense meeting with their product management team, trying to figure out exactly how I would be able to migrate off of the feature and onto the platform that everyone else was on. I was in a typical vendor situation where what I was doing wasn’t aligned with their direction, and I was feeling some pain because of it. Then an unexpected thing happened. The next day, at Chef Community Summit, a VP at Chef got up in front of everyone and told _us_ to suggest topics. I submitted a Policyfiles topic and showed up to the session, surprised by the number of people who attended. I went through the pros and cons of the feature, and it seemed to resonate with the audience. 
The same product management team I had been working with the day before attended; they had a great attitude, listened, and were interested and engaged. It then dawned on me: working with Chef Software Inc. is indeed working with a vendor, but it’s much more than that. Their DNA came about within an open source community, and their value is tied up in a community collaborating to make their product better. So rather than shut me down, they encouraged me. Even when they disagreed with me. I was a member of their community, so I could propose whatever I wanted, and if there was a benefit to that community, then they were all for it. So after the Chef Community Summit, I wrote some blog posts, went on the Food Fight Show, and answered questions on Slack in the #policyfile channel. I also worked with Chef Software Inc. as a customer to make sure they understood how central Policyfiles were to our workflow and success with Chef. They listened, they changed their approach, and they even let me talk about it on Wednesday at 2PM at ChefConf. Almost three years ago, I started down a path of figuring out who we would partner with to change operations for NCR’s Hospitality products. I recognized even then that Chef, Inc. has a different kind of DNA. Yes, there is a sales organization (and they’re great). Yes, there are contracts, and we pay money. But the best part of it all is that I’m a part of a larger community that is all pointed in the same direction: we want to change the fundamentals of IT and help traditional businesses make that transformation. This is a technical problem, a cultural problem, and an organizational problem, and I feel so fortunate and blessed to be able to solve it with such a great community of like-minded people. --- # Chef Artifacts with Artifactory URL: https://hedge-ops.com/posts/artifactory/ Explore how Artifactory is an excellent tool for managing Chef artifacts for deployments. Learn how to install, upload, control access and integrate with Chef. 
Discover the benefits of Artifactory over other alternatives. If you’re going to deploy anything, you’ll eventually come across a fundamental need: you need somewhere to put your large files. At first, Chef seems like an attractive choice for this, but on deeper inspection it’s a horrible path to take. Chef is really great at delivering idempotent scripts to your machines to test and repair. It’s not that great of a file server. Storing your files in Chef will make your cookbooks more bloated, your source code repositories more bloated, and cause pain all around. So it’s been a pleasure recently to discover what a great tool Artifactory is for managing artifacts for deployments. Artifactory very naturally and easily lets you get up and running with hosting artifacts in a safe and scalable way. I’d like to lay out a bit of how we use Artifactory for those interested in using it themselves. ## Licensing First, I _really_ want to give Artifactory my money, but there is a budget cycle to contend with, and besides, people don’t want to spend money unless they can see the value they’re getting. So this post will be based on the _free_ version of Artifactory. Fortunately, the free version contains what we need; we just need to host artifacts and call it good. Later we can get into the fancypants gem repo, supermarket, and artifact expiration features. For now, let’s ship it! ## Installation It was quite delightful to get Artifactory up and running. In evaluation mode, I did this with Docker: ### Docker I first just pull the image:

```shell
docker pull docker.bintray.io/jfrog/artifactory-oss:latest
```

And then run the container:

```shell
docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
```

I then navigate my browser to `http://localhost:8081`, and I’m immediately using Artifactory. This was an excellent example of the [inverted learning](/posts/docker) that Annie loves to talk about. 
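If you’d rather verify from the command line than the browser, Artifactory exposes a simple health endpoint. Assuming the default port mapping from the `docker run` above, a quick check might look like this:

```shell
# Artifactory's system ping endpoint returns the plain text "OK"
# once the server has finished starting up.
curl -s http://localhost:8081/artifactory/api/system/ping
```

This is handy in CI scripts that need to wait for the container to be ready before uploading anything.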
### Package Installation

For those of us freaked out about running Docker in production, Artifactory’s package installation is pretty good as well:

```shell
wget https://bintray.com/jfrog/artifactory-rpms/rpm -O bintray-jfrog-artifactory-rpms.repo
sudo mv bintray-jfrog-artifactory-rpms.repo /etc/yum.repos.d/
sudo yum install jfrog-artifactory-oss
```

It’s really that easy; they did a fantastic job of making it simple.

## Uploading

Creating and uploading artifacts with Artifactory is easy to intuit on the web UI. For CI jobs, we’ve found that [`jfrog.exe`](https://www.jfrog.com/confluence/display/CLI/JFrog+CLI) is a really nice way of making uploads easy, since [you can store your authentication credentials](https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-Configuration) for uploading to Artifactory on your build agents in a `~/.jfrog/jfrog-cli.conf` file. Your usage of `jfrog.exe` can then be very simple.

You can [upload](https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-UploadingFiles):

```shell
jfrog rt u *.tgz product-repo/policyfile-archives
```

And you can [download](https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-DownloadingFiles):

```shell
jfrog rt dl product-repo/policyfile-archives/webserver-3498732hjfkdlsfdahlfewlrhewkl.tgz
```

If you’re doing it right, that’s about all you need.

## Access Control

For any automation situation, you want to create access control around the assets of that automation. That way you prevent, at many levels, the situation where your scripts accidentally deploy the same product on all your nodes. If you keep access restricted, you keep things happening the way you said they would happen. Fortunately, Artifactory allows [users to be created](https://www.jfrog.com/confluence/display/RTF/Managing+Users) for this very purpose.
So each of your products could have its own Artifactory user, which would be granted access only to the repositories you say it should have.

## Chef Integration

Fortunately, Artifactory provides an HTTP API that works very nicely with the `remote_file` resource:

```ruby
remote_file 'C:\cafe\staging\chef-client-13.0.118-1-x64.msi' do
  source 'https://productuser:mypassw0rd@artifactory.mycompany.com/artifactory/chef-repo/chef-client-13.0.118-1-x64.msi'
  checksum 'c594965648e20a2339d6f33d236b4e3e22b2be6916cceb1b0f338c74378c03da'
end
```

You can [create a module](https://coderanger.net/chef-tips/#3) that will build your URL for you and make it even easier:

```ruby
remote_file 'C:\cafe\staging\chef-client-13.0.118-1-x64.msi' do
  extend ::Artifactory::UrlResolver
  source artifactory_url 'chef-repo/chef-client-13.0.118-1-x64.msi'
  checksum 'c594965648e20a2339d6f33d236b4e3e22b2be6916cceb1b0f338c74378c03da'
end
```

Having an HTTPS URL is great because I can use a lot of third-party Chef cookbooks that just need a URL. We have even taken it a step further and developed our own custom resource:

```ruby
artifactory_file 'C:\cafe\staging\chef-client-13.0.118-1-x64.msi' do
  repository_path 'chef-repo/chef-client-13.0.118-1-x64.msi'
  checksum 'c594965648e20a2339d6f33d236b4e3e22b2be6916cceb1b0f338c74378c03da'
end
```

This automatically determines the Artifactory path, so all of our cookbooks that just want to download a file are easier to write.

## Checksum Validation

You should be checking checksums on all downloads. Fortunately the `remote_file` resource gives you a built-in way to do this. Simply add the `checksum` attribute to your resource and you have validation. That way if your files are tampered with or aren’t what you expected, you don’t go ahead; you stop right there. That’s the _limit the damage when things go wrong_ principle at work again. This is something I learned well from my security friends.
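To produce the `checksum` value in the first place, you can hash the artifact before uploading it. Here’s a minimal sketch in plain Ruby (the file name and contents are illustrative):

```ruby
require 'digest'

# Write a stand-in artifact, then compute the SHA-256 value you would paste
# into the remote_file checksum attribute.
File.write('artifact.msi', 'example artifact contents')
checksum = Digest::SHA256.file('artifact.msi').hexdigest
puts checksum
```

`remote_file` compares this value against what it actually downloaded, and refuses to proceed on a mismatch.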
## Conclusion

Artifactory is a fantastic and essential ally to Chef in your search for DevOps nirvana. I highly recommend it over the other alternatives: Nexus by Sonatype and your own SFTP server. We’re extremely happy with this product.

---

# Chef Rollback with Policyfiles

URL: https://hedge-ops.com/posts/chef-rollback/

Explore the benefits of using Chef Policyfiles for rollback mechanisms in application release automation. Learn how to safely undo changes, simplify deployments, and plan for potential rollbacks.

When we first looked at application release automation tools, one of the first things people told me we needed was a solid rollback mechanism. One of my colleagues even insisted that without satisfying his rollback scenarios, it was silly to even look at a tool for application release automation. I can definitely understand the sentiment; when you’re making a change and that change goes badly, you really want a mechanism to get out of that bad situation. It would be fantastic if we had a time machine and were able to simply tell ourselves _stop_! But in lieu of that, we have to devise a plan so that when we need to get out of a change we made, we can do so safely.

## Policyfiles Simplify Rollback

Fortunately, we have the [policyfiles](/posts/policyfiles) feature at our disposal, which makes _everything_ in this area so much simpler. In the classical Chef model, your rollback might be a change to an environment pin, or a role, or a cookbook, or a combination of all of these. And if you, like most people in a panic, made some on-the-fly changes to any of these, good luck getting out of that mess. With policyfiles, rollback of your Chef code is quite easy: you simply upload the old version of the policy to the Chef server and reconverge your nodes. That’s it. It’s virtually impossible to get yourself into a mess where you can’t somehow _remember_ what your rollback was.
## With a Defined Deployment It’s Even Simpler

And, now that I’ve shown you how you can do a controlled, atomic deployment with a [policyfile deployment](/posts/policyfile-deployment-with-cafe-and-psake), things get even easier! You _just_ went to Jenkins and uploaded policy `1.0.32` for your nodes related to product X. Things went south. Now go back to that same place, enter `1.0.31`, and roll out that policy to all your nodes, safely and immediately, with [cafe](/posts/introducing-cafe).

## Sometimes a Data Bag will Suffice

If you’re just dealing with whether you’re going to deploy version `A` or `B` of your application, with Chef you can simply store which version you’re on in a Data Bag. If your Chef code doesn’t need to change, a _rollback_ is simply an update of your Data Bag and then a convergence with cafe. I’ve found it a best practice to decouple my Chef code, wrapped in policies, from what version my application is on, stored in Data Bags.

## Code a Rollback in Critical Situations

It would be silly of me to suggest that merely rolling back Chef code and product code is enough to satisfy a true rollback. In some situations that isn’t sufficient. Let’s say we have a situation like this:

```text
Version 1.0
  website myweb exists

Version 2.0
  website myweb exists
  website newmicroservice exists
```

And let’s say you went from version `1.0` to version `2.0`. And things went south, so you rolled back. In this situation, with Chef you would still have `newmicroservice` there. So to facilitate this kind of change, you’ll want to do this:

```text
Version 1.0
  website myweb exists

Version 1.1
  website myweb exists
  website newmicroservice DOES NOT exist

Version 2.0
  website myweb exists
  website newmicroservice exists
```

Here you’re giving your Chef code the ability to roll back and undo stuff you plan on doing in the future. This is smart planning.
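The Data Bag decoupling above can be sketched in plain Ruby. This is a simplified model rather than real Chef DSL, and the item names and versions are illustrative:

```ruby
require 'json'

# A data bag item records which application version the nodes should be on;
# the Chef code, wrapped in a policy, stays the same across a rollback.
deploy_item = JSON.parse('{"id": "myproduct-deploy", "version": "2.0.0"}')

# Rolling back is just an edit to the data bag followed by a converge.
rolled_back = deploy_item.merge('version' => '1.0.0')
puts rolled_back['version']
```

The point of the design is that no cookbook or policy changes are needed to move between application versions, so the rollback surface is a single JSON document.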
I recommend it for any time a product adds new features; always add a version of the cookbook (or better yet, an attribute to a cookbook) that will turn that thing off, so if you need to roll back you can roll back safely. ## Conclusion Hopefully by now you can see that the rollback mechanisms offered by Chef Policyfiles are an excellent alternative to the coded rollback in other application release automation tools. In addition to this, you get all the fantastic elements of infrastructure as code with Chef and infrastructure testing with InSpec. The holistic approach is what gets you a full solution that will create the velocity you’re looking for. --- # Policyfile Pipeline with Jenkinsfile URL: https://hedge-ops.com/posts/policyfile-pipeline-with-jenkinsfile/ Discover how to manage Chef changes in all environments using policyfiles. Learn how to create a Jenkins pipeline for your policyfiles, ensuring secure and efficient deployment. I’m a huge proponent of [policyfiles](/posts/policyfiles) for managing Chef changes in all of your environments. Let’s talk a little about how we take a policyfile and create a pipeline in Jenkins around it to get it deployed to the right places. Many environments that aren’t as security-conscious will have a single Chef Server to rule them all, connected to a single CI server. This is the model that [Chef Workflow](https://docs.chef.io/workflow.html) assumes, and it’s a nice situation to be in. In those situations, the pipeline I lay out will be much simpler, but I still recommend following the basic pieces. Since it’s more complicated and therefore covers all the bases, we’ll go for a disconnected, releasable pipeline that can and will traverse the development to operations barrier that many security-minded organizations have. For our policyfiles pipeline, we create a similar process to our cookbooks: 1. We keep a separate `policies` git repo for each product group of policies that we have. 
We don’t keep the policyfiles in the cookbook. This is largely because we want to have our own pipeline for policies that is _unrelated_ to the cookbook pipeline. The cookbook pipeline will promote a cookbook to a _supermarket_, and the policy will pull the cookbook _from_ that supermarket. This creates two separate processes that each have a beginning and an end, but are disconnected, which allows for independence. This is a critical aspect of designing any pipeline, and one I’ll blog about in the near future.

2. We have a `rakefile` for doing tasks that can be done locally on a developer machine.
3. We then put that into a pipeline with a `Jenkinsfile`.

Let’s first look at the `rakefile`:

## Policyfile Rakefile

```ruby
require 'rake/clean'
require 'rake/packagetask'
require 'os'

def product_name
  'myproduct'
end

def policies
  FileList["#{product_name}-*.rb"]
end

def policies_version(build_number)
  "1.#{build_number}.0"
end

def archive_name
  build_number = ENV['BUILD_ID']
  if build_number.nil?
    "#{product_name}_policies.zip"
  else
    "#{product_name}_policies_#{policies_version(build_number)}.zip"
  end
end

task :default => [:compile_policies]

desc "compiles all policies"
task :compile_policies do
  rm Dir.glob('*.lock.json')
  policies.each do |policyfile|
    sh 'chef', 'install', policyfile
  end
end

directory 'staging'
CLEAN.include('staging')
CLEAN.include('*.zip')

desc "Exports all policies to archives and stages them in the archive folder"
task :export_policies => 'staging' do
  policies.each do |policyfile|
    sh 'chef', 'export', policyfile, 'staging', '-a'
  end
end

task :stage => [:clean, 'staging', :export_policies] do
  cp 'deploy.ps1', 'staging'
  cp 'psake.psm1', 'staging'
  cp 'psake.psd1', 'staging'
  cp 'psake.ps1', 'staging'
end

task :package => [:stage] do
  cd('staging') do
    if OS.windows?
      sh 'C:\Program Files\7-Zip\7z.exe', 'a', '-tzip', archive_name, '*.*', '-x!*.zip'
    else
      sh 'zip', '-r', archive_name, '.', '-x', '*.zip'
    end
  end
end
```

Let’s unpack this a little bit.
Here’s what’s going on:

1. `compile_policies` will run `chef install` against all files that match the pattern `myproduct-*.rb`, generating the `Policyfile.lock.json` for each policy in the repo.
2. `export_policies` will export all policies to `tgz` files with the `chef export` command.
3. `stage` will stage everything to be packaged into a `staging` folder, including the deployment scripts written in psake (more on that in the next post).
4. `package` will bundle the `tgz` files and the deployment scripts into a zip archive.

## Policyfile Jenkinsfile

Now that we have a rakefile that can do the work we need, it’s time to get that into a `Jenkinsfile` to describe the pipeline. The pipeline will create a package of all policyfile archives and put them, with the script that will deploy them, on our Artifactory server. Here’s an example:

```groovy
#!/usr/bin/env groovy

def repository = 'myproduct-policies'
def workingDirectory = "policies/${repository}"
// the current branch that is being built
def currentBranch = env.BRANCH_NAME

def execute(command){
  ansiColor('xterm'){
    bat command
  }
}

stage('Checkout') {
  node('windows') {
    checkout([$class: 'GitSCM',
      branches: scm.branches,
      doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
      extensions: scm.extensions + [
        [$class: 'RelativeTargetDirectory', relativeTargetDir: workingDirectory],
        [$class: 'LocalBranch', localBranch: currentBranch]
      ],
      userRemoteConfigs: scm.userRemoteConfigs
    ])
    dir(workingDirectory) {
      execute('rake -t clean')
    }
    stash name: 'everything', includes: '**'
  }
}

stage('Compile') {
  node('windows') {
    unstash 'everything'
    dir(workingDirectory) {
      execute('rake -t compile_policies')
      try {
        execute('git add *.lock.json')
        execute("git commit -m \"Automatically Compiled Policyfiles\"")
        withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'abcYOUR_GUID_HERE123', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
          execute("git push https://${env.GIT_USERNAME}:${env.GIT_PASSWORD}@almgit.ncr.com/scm/chef/${repository}.git ${currentBranch}")
        }
      } catch(error) {
        echo "Nothing to commit because of error: ${error}, so skipping pushing"
      }
    }
    stash name: 'compiled', includes: '**'
  }
}

stage('Package') {
  node('windows') {
    unstash 'compiled'
    dir(workingDirectory) {
      execute('rake -t package')
      archiveArtifacts 'staging/*.zip'
    }
  }
}

stage('Publish') {
  node('windows') {
    unstash 'compiled'
    dir(workingDirectory) {
      execute('jfrog.exe rt upload "staging\\\\*.zip" myproduct-repo/myproduct-policies/')
    }
  }
}
```

Here is a description of all the stages:

| Stage    | Description                                                       |
| -------- | ----------------------------------------------------------------- |
| Checkout | Checks out the policies repo                                      |
| Compile  | Generates all policyfile.lock.json files and checks them into git |
| Package  | Creates tgz files and zips them up with deployment scripts        |
| Publish  | Publishes this all to Artifactory                                 |

You can see a pattern here with the pipelines from the earlier posts on [cookbook builds](/posts/cookbook-development-with-rakefile) and [cookbook pipelines](/posts/cookbook-pipeline-with-jenkinsfile). They rely on scripts that can run locally, then end up delivering to something that is the source for the next step in the process. More on that in the next post: how we deploy these policies to a Chef Server and reconverge the nodes.

## Conclusion

Hopefully you’re starting to see the pattern I use when designing a pipeline element in my Chef pipeline. Everything has a starting point and a destination. Every pipeline _segment_ will take a _stable_ input and put it into an _even more stable_ location at the end. It all flows together very quickly and then allows for quick changes that can flow to production.

---

# Cookbook Pipeline with Jenkinsfile

URL: https://hedge-ops.com/posts/cookbook-pipeline-with-jenkinsfile/

Discover how to create a cookbook pipeline using Jenkinsfile for your CI environment.
This blog post provides a detailed guide on setting up Jenkins as your tool of choice for managing deployment pipelines.

Now that we have a [local cookbook build](/posts/cookbook-development-with-rakefile) ready to go, it’s time to get that into a CI environment. I have been a fan of [TeamCity](https://www.jetbrains.com/teamcity), and my friends at Chef have done a great job with [Chef Workflow in Automate](https://docs.chef.io/workflow.html). For us, however, [Jenkins](https://jenkins.io/) is our tool of choice for managing our deployment pipelines, for a few reasons:

1. Jenkins is _free_. We are able to get done what we need inside the free version, so it’s nice that we don’t need a license or support.
2. Jenkins is _flexible_. We have complicated requirements around security, and Jenkins has been easy to bend to those requirements without a lot of fuss.
3. Jenkins is _friendly to a pipeline mindset_. Compared to TeamCity, Jenkins is much better at laying out a workflow and walking through the various stages of that workflow, defined in a single file.
4. Jenkins is _recommended by expensive consultants_. In a large enterprise that’s important. If you go with a tool that the high-powered consultants don’t put on a _here’s what people are doing_ list, you end up fighting an uphill battle. Choose those battles wisely; you’ll likely lose them unless you have a _very_ compelling use case.

So now that we’ve decided on Jenkins as our CI of choice, let’s talk about how we would implement that.

## Jenkinsfile Example

First, in your cookbook repository in git you would have a `Jenkinsfile`.
Ours looks like this (just scroll down if you don’t care; it’s ok):

```groovy
#!/usr/bin/env groovy

// COOKBOOK BUILD SETTINGS
// name of this cookbook
def cookbook = 'cafe'

// SUPERMARKET SETTINGS
// the branch that should be promoted to supermarket
def stableBranch = 'master'
// the current branch that is being built
def currentBranch = env.BRANCH_NAME

// OTHER (Unchanged)
// the checkout directory for the cookbook; usually not changed
def cookbookDirectory = "cookbooks/${cookbook}"

// Everything below should not change unless you have a good reason :slightly_smiling_face:
def building_pull_request = env.pullRequestId != null

def notify_stash(building_pull_request){
  if(building_pull_request){
    step([$class: 'StashNotifier', commitSha1: "${env.sourceCommitHash}"])
  }
}

def execute(command){
  ansiColor('xterm'){
    bat command
  }
}

def rake(command) {
  execute("chef exec rake -t ${command}")
}

def fetch(scm, cookbookDirectory, currentBranch){
  checkout([$class: 'GitSCM',
    branches: scm.branches,
    doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
    extensions: scm.extensions + [
      [$class: 'RelativeTargetDirectory', relativeTargetDir: cookbookDirectory],
      [$class: 'CleanBeforeCheckout'],
      [$class: 'LocalBranch', localBranch: currentBranch]
    ],
    userRemoteConfigs: scm.userRemoteConfigs
  ])
}

stage('Lint') {
  node('windows') {
    notify_stash(building_pull_request)
    echo "cookbook: ${cookbook}"
    echo "current branch: ${currentBranch}"
    echo "checkout directory: ${cookbookDirectory}"
    try{
      fetch(scm, cookbookDirectory, currentBranch)
      dir(cookbookDirectory){
        // clean out any old artifacts from the cookbook directory including the berksfile.lock file
        rake('clean')
      }
      dir(cookbookDirectory) {
        try {
          rake('style')
        } finally {
          step([$class: 'CheckStylePublisher', canComputeNew: false, defaultEncoding: '', healthy: '', pattern: '**/reports/xml/checkstyle-result.xml', unHealthy: ''])
        }
      }
      currentBuild.result = 'SUCCESS'
    } catch(err){
      currentBuild.result = 'FAILED'
      notify_stash(building_pull_request)
      throw err
    }
  }
}

stage('Unit Test'){
  node('windows') {
    try {
      fetch(scm, cookbookDirectory, currentBranch)
      dir(cookbookDirectory) {
        rake('test:berks_install')
        rake('test:unit')
        currentBuild.result = 'SUCCESS'
      }
    } catch(err){
      currentBuild.result = 'FAILED'
      notify_stash(building_pull_request)
      throw err
    } finally {
      junit allowEmptyResults: true, testResults: '**/rspec.xml'
    }
  }
}

stage('Functional (Kitchen)') {
  node('kitchen') {
    try{
      fetch(scm, cookbookDirectory, currentBranch)
      dir(cookbookDirectory) {
        rake('test:kitchen:all')
      }
      currentBuild.result = 'SUCCESS'
    } catch(err){
      currentBuild.result = 'FAILED'
    } finally {
      notify_stash(building_pull_request)
      dir(cookbookDirectory) {
        rake('test:kitchen:destroy')
      }
    }
  }
}

if (currentBranch == stableBranch){
  lock(cookbook){
    stage ('Promote to Supermarket') {
      node('kitchen'){
        fetch(scm, cookbookDirectory, currentBranch)
        dir(cookbookDirectory) {
          execute "git branch --set-upstream ${currentBranch} origin/${currentBranch}"
          rake('release')
        }
      }
    }
  }
}
```

You can see here that the `Jenkinsfile` is acting more like an integration point to the `rakefile`. That’s how we like it; we want as much as possible to be reproducible locally. Then we walk through the stages and do the things. Here is a more detailed explanation of the stages:

| Stage                  | Description                                        |
| ---------------------- | -------------------------------------------------- |
| Lint                   | Checks that the code meets our guidelines          |
| Unit Test              | Runs Chef unit tests on the cookbook, if any exist |
| Functional (Kitchen)   | Runs test kitchen against all suites               |
| Promote to Supermarket | Promotes the cookbook to an internal supermarket   |

This provides a very simple way for cookbooks to go from a check-in to the supermarket.

### Setting this up in Jenkins

In Jenkins, we create two builds:

1. A [pipeline](https://jenkins.io/doc/book/pipeline/) build that builds off of `master`.
Notice that we _don’t_ use the multi-branch pipeline build at the moment, because we were having quality issues with that feature in Jenkins and wanted to test our pull requests.

2. A [pull request](https://wiki.jenkins-ci.org/display/JENKINS/Stash+pullrequest+builder+plugin) builder that tests pull requests in our local [bitbucket](https://www.atlassian.com/software/bitbucket) server.

The pull requests inside bitbucket are set to not allow acceptance without a passing build, so this keeps our `master` branch clean and ready to go. Just in case, the `master` build will build everything before sending the cookbook off to the supermarket.

You’ll also notice that the `Jenkinsfile` has a lot of `try/catch` logic in it. This is so the `Jenkinsfile` can notify the pull request verifier that a build failed, and that message will show up inside the pull request. So you get some complexity here, but great benefit from having nice integration with your pull request workflow.

Once pull requests are solid, it’s time to lock down your master branch. Don’t let a lot of people commit directly to it; instead have them submit pull requests. This follows the normal open source model that products like Chef use, and you’ll find that it works very well.

## Conclusion

With a solid cookbook build in place and a CI process, things start to get regularly tested and quality goes up. I had to be persuaded by my colleagues to go the pull request verifier route, but now that I have, I see what they were trying to tell me: pull requests get tested, master is solid, and your speed of delivery goes up. Maybe one day the Jenkins Blue Ocean project will catch up to Bitbucket integration, but until then, this works pretty nicely for us. I’d like to also thank and credit my colleagues [John Kerry](http://kerryhouse.net/) and Richard Godbee for leading me in this direction.
They spent a ton of time helping me understand how to make a good workflow in Jenkins, and the outline above would not be possible if it weren’t for their help.

---

# Cookbook Development with Rakefiles

URL: https://hedge-ops.com/posts/cookbook-development-with-rakefile/

Explore the process of cookbook development with Rakefiles. Learn how to standardize cookbook quality, implement automated testing, and release cookbooks efficiently. Ideal for Ruby developers.

When we [started Chef](/posts/my-advice-for-chef-in-large-corporations), we had a loose set of rules for everyone to follow and sent them on their way. We quickly realized, however, that we needed to standardize how a cookbook met quality standards before it got released. Someone would try to make a simple change to a cookbook, and it didn’t meet our coding standards. Or they forgot to [introduce kitchen](/posts/test-kitchen-required-not-optional). Or they remembered, but they didn’t do anything when their kitchen broke three weeks ago. It was chaos.

Essentially our cookbooks are like any other code product: they need a build process, automated testing, and a way to release them to the outside world. Without that, you’ll have chaos and doom. The best way I know of to do this is with `rake` (see [this example](https://github.com/mhedgpeth/cafe-cookbook/blob/master/Rakefile) on my `cafe` cookbook). `rake` has several advantages:

1. It’s all in one file, using a common framework that other Ruby developers use.
2. It easily integrates within a Chef environment using the `chef exec` command.
3. It integrates well into any existing pipeline or CI server.

Its one disadvantage is that it can be difficult for non-Ruby developers to understand; _however_, the benefits above far outweigh this disadvantage. We’ve found that with the simple `rakefile` below, most people don’t even have to touch their rakefile and can just use it.
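If you haven’t used `rake` before, here’s a minimal, self-contained sketch of the prerequisite chaining that makes a one-file build work (task names are illustrative, and a real Rakefile wouldn’t call `invoke` itself — the `rake` command does that):

```ruby
require 'rake'
extend Rake::DSL

# Record the order tasks actually run in.
RAN = []

# :test depends on :style, and :default depends on :test, mirroring the
# clean -> style -> test chain in the cookbook rakefile below.
task :style do
  RAN << :style
end

task :test => :style do
  RAN << :test
end

task default: :test

# Running `rake` with no arguments invokes :default, which pulls in the chain.
Rake::Task[:default].invoke
puts RAN.inspect
```

This prerequisite graph is what lets one file drive linting, testing, and releasing while each step stays individually runnable.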
We use the same `rakefile` for every cookbook, located in the base folder of the cookbook in a dedicated git repository for that cookbook. Here’s an example:

```ruby
task default: [:clean, :style, :test]

desc 'Removes any policy lock files present, berks lockfile, etc.'
task :clean do
  %w(
    Berksfile.lock
    .bundle
    .cache
    coverage
    Gemfile.lock
    .kitchen
    metadata.json
    vendor
    policies/*.lock.json
    commit.txt
    rspec.xml
  ).each { |f| FileUtils.rm_rf(Dir.glob(f)) }
end

desc 'Run foodcritic and cookstyle on this cookbook'
task style: 'style:all'

namespace :style do
  # Cookstyle
  begin
    require 'cookstyle'
    require 'rubocop/rake_task'
    RuboCop::RakeTask.new(:cookstyle) do |task|
      # If we are in CI mode then add formatter options
      task.options.concat %w(
        --require rubocop/formatter/checkstyle_formatter
        --format RuboCop::Formatter::CheckstyleFormatter
        -o reports/xml/checkstyle-result.xml
      ) if ENV['CI']
    end
  rescue LoadError => e
    puts ">>> Gem load error: #{e}, omitting style:cookstyle" unless ENV['CI']
  end

  # Load foodcritic
  begin
    require 'foodcritic'
    desc 'Run foodcritic style checks'
    FoodCritic::Rake::LintTask.new(:foodcritic) do |task|
      task.options = {
        fail_tags: ['any'],
        progress: true,
      }
    end
  rescue LoadError => e
    puts ">>> Gem load error: #{e}, omitting style:foodcritic" unless ENV['CI']
  end

  task all: [:cookstyle, :foodcritic]
end

desc 'Run unit and functional tests'
task test: 'test:all'

namespace :test do
  begin
    require 'rspec/core/rake_task'
    desc 'Run ChefSpec unit tests'
    RSpec::Core::RakeTask.new(:unit) do |t|
      t.rspec_opts = ENV['CI'] ? '--format RspecJunitFormatter --out rspec.xml' : '--color --format progress'
      t.pattern = 'test/unit/**{,/*/**}/*_spec.rb'
    end
  rescue LoadError => e
    puts ">>> Gem load error: #{e}, omitting test:unit" unless ENV['CI']
  end

  begin
    require 'kitchen/rake_tasks'
    desc 'Run kitchen integration tests'
    Kitchen::RakeTasks.new
  rescue StandardError => e
    puts ">>> Kitchen error: #{e}, omitting test:kitchen tasks" unless ENV['CI']
  end

  namespace :kitchen do
    desc 'Destroys all active kitchen resources'
    task :destroy do
      sh 'kitchen destroy'
    end
  end

  task all: ['test:unit', 'test:kitchen:all']
end

desc 'bumps the patch version and releases the cookbook to the supermarket'
task release: 'release:all'

namespace :release do
  begin
    require 'bump'
    require 'bump/tasks'
    desc 'tags and pushes a patch change'
    task tag: ['release:bump:patch'] do
      sh 'git pull'
      sh 'git push'
    end
  rescue LoadError => e
    puts ">>> Gem load error: #{e}, omitting release:bump*" unless ENV['CI']
  end

  begin
    require 'stove/rake_task'
    Stove::RakeTask.new
  rescue LoadError => e
    puts ">>> Gem load error: #{e}, omitting release:publish" unless ENV['CI']
  end

  task all: ['release:tag', 'release:publish']
end
```

As I’ve said before, you don’t really need to understand every line of this `rakefile` in order to make good use of it. So let’s get up to speed on that part:

## Setup

Before running the `rakefile` you’ll need to set up some gems:

```shell
chef gem install stove bump
```

These gems are used for uploading to a supermarket and bumping a version, respectively. More on that below.

## Running Locally

| Function | Description                               | Command                   |
| -------- | ----------------------------------------- | ------------------------- |
| Lint     | Ensures that code meets standards         | `chef exec rake -t style` |
| Test     | Ensures that code runs and is ready to go | `chef exec rake -t test`  |

We run our `rake` within the Chef Ruby environment, so we prepend it with `chef exec`, which says _run this with Chef’s built-in Ruby_.
That makes everything much more consistent and easy, especially considering we’re using the cookstyle and kitchen gems here.

To run your linting, just run `chef exec rake -t style`. This will run _both_ [cookstyle](https://github.com/chef/cookstyle) and [foodcritic](http://www.foodcritic.io/) on your cookbooks. We’ve found both linting tools to be helpful. Cookstyle is a saner wrapper around rubocop. Another great pro-tip on using `cookstyle` is that you can automatically fix easy-to-fix errors by running `cookstyle -a`. That saves a ton of time.

Once you get past the linting phase, you can run unit tests and kitchen with `chef exec rake -t test`. We consider [test kitchen](http://kitchen.ci/) to be an [absolutely critical](/posts/test-kitchen-required-not-optional) aspect of our coding process. Would you ever write code that you never ran before deploying it somewhere? If you’re not using test kitchen, that’s exactly what you’re doing!

This `rakefile` will also allow you to bump your versions automatically (`release:bump:patch`) and upload to a supermarket (`release:publish`). You’ll need the `stove` and `bump` gems installed with `chef gem install stove bump`. Also, you’ll need to add a `.stove` file to house the configuration of how to talk to the supermarket, with these contents:

```json
{
  "username": "yourusernametosupermarket",
  "key": "C:/Users/yourusername/.chef/yourusername.pem",
  "no-git": "true",
  "endpoint": "https://supermarket.yourcompany.com"
}
```

The bump and publish targets should be reserved for your CI agent most of the time.

## Running with CI

When you run this with a CI server, you’ll need to set the `CI` environment variable to `true` so your tests will report the _CI_ way. Then simply run the targets as you need. I’ll share a version of our `Jenkinsfile` in the next post.

## Why not Delivery though?

My friend Matt Stratton [suggests](https://www.mattstratton.com/post/getting-started-with-chef/) using Chef Delivery cookbooks to do this same thing.
We didn’t go in this direction for a few reasons:

1. _Ignorance_: we don’t know delivery very well, and there isn’t a community around it that can get a local build up and running quickly. Most of delivery seems to be centered around getting Chef Workflow to work, which is not something we had plans to do.
2. _Training_: more people know rake than know delivery, so rake is the easier option.
3. _Simplicity_: while rake does leave you a bit confused as to the particulars of what you’re doing, it’s all in one file and can be easily run. The delivery stuff is in a hierarchy of folders and therefore takes more to understand.

## Conclusion

Having a local cookbook build that is standard in all of our projects has become essential to our implementation of Chef at scale. I think the `Rakefile` I use above is an excellent choice for standardizing in a way that is both flexible and simple.

---

# DevOps Leadership Summit in Dallas

URL: https://hedge-ops.com/posts/devops-leadership-summit/

Join us at the DevOps Leadership Summit in Dallas on August 28, 2017. Network with industry leaders and learn how to facilitate a DevOps transformation in your company.

Last year when we organized the first DevOps Days DFW, many attendees told us that they thought the content was fantastic, but that they wished they could have something they could invite their leadership team to. Since [alignment](/posts/finding-alignment) is so critical to changing anything in a large organization, the DevOps Days DFW organizers agree and are pleased to announce a _DevOps Leadership Summit_ on Monday, August 28, 2017, from 8:00 to 1:30 at the [Capital One Plano Conference Center](https://goo.gl/maps/GSqXneQtXAL2). We’re pleased to have [Nicole Forsgren](http://nicolefv.com/) of DORA, [Adam Jacob](https://www.linkedin.com/in/adamjacob/) from Chef Software, and [John Willis](https://www.linkedin.com/in/johnwillisatlanta/) from Docker join us for the event.
We will have a couple of talks, a workshop, and an interactive networking lunch where attendees can interact directly with our speakers. Attendance at the event is limited to 50 people and is by invitation only through nominations. The ideal attendee is a Director, VP, or C-level executive who is facilitating, or is interested in facilitating, a DevOps transformation in his or her company. To nominate yourself or someone else, let us know via email how this person fits those criteria. The cost of the event is covered by a ticket to DevOps Days DFW 2017, which can be purchased [here](https://www.eventbrite.com/e/devopsdays-dfw-2017-tickets-33482024637). We don't expect everyone attending the Leadership event to attend the entire DevOps Days. In fact, it would be just fine if they came to the keynotes in the morning and that's it. If you know anyone who would benefit from coming to our Leadership Summit, please let us know!

---

# Which Line?

URL: https://hedge-ops.com/posts/which-line/

Explore the impact of software development on a company's bottom line versus top line. How I navigated from a cost-saving role to a revenue-generating position.

I started my career as a developer at a workers' compensation case management company. We wrote software that nurses, who were employees, used to manage workers' compensation cases in hopes that we would get people back to work quicker and thus lower costs. I had a talented and opinionated boss who gave me a lot of room to learn and grow. It was a great first job. After a couple of years, however, it became clear that the software I was working on wasn't seen as increasing the revenue of the company but decreasing its costs. Within this business model, the key _operational_ revenue was coming from the nurses who were managing the cases. I was making the nurses more efficient, but I wasn't _directly_ contributing to revenue. In business terms, I was not contributing to the _top line_; I was contributing to the _bottom line_.
For those of us who don't handle P&L every day, here's what that means:

```text
Insurance Companies pay for Case Management (Revenue) - TOP LINE
SUBTRACT Cost of Nurses to Manage Cases (Cost)
  MY SOFTWARE Lowers the Number of Nurses We Need (Lower Cost)
Company Investors get money left over (Profit) - BOTTOM LINE
```

So in this job, I was contributing to the bottom line by lowering the cost of operating the business. This is wonderful, but I quickly realized the downsides:

1. _There is a ceiling to the financial impact I can have on the company._ Within this model, I can only add value up to what operations cost. And the further down the road I get, the less efficiency I can extract out of the system. For example, if in the first year it costs $5M to run this group and I make software changes that mean we only need to spend $4M on nurses, I made a great impact. But in the next year, I'll need to…reduce costs by another $1M? At some point the efficiency drive runs up against the core business model.
2. _The rewards go to the top line._ Who are the people getting the large bonuses and promotions? The people who are closest to the top line revenue. That is an unfair fact of business but a very real one. So I saw myself with fewer options than my peers within other business models. I've often wondered why this is, and the closest I can come to an answer is the previous reason: if you have a rockstar salesperson, they could _create_ millions of dollars of revenue for the company by closing a new deal. For those dealing with the bottom line, the upside is limited to costs.

I quickly came to the conclusion that I wanted to work at a software company and got a job at Radiant Systems (which was later acquired by NCR).
At NCR there is a different equation:

```text
Restaurants pay for My Software (Revenue) - TOP LINE
SUBTRACT Cost of support, services, operations
Company Investors get money left over (Profit) - BOTTOM LINE
```

Within this model, my software is what the company is selling. This changes everything. The upside to my work is unlimited. If I work with sales to get a feature into the software, that could move the needle tremendously for the company. For years, I enjoyed the career benefits of being aligned with revenue at a business. And then I moved to a Cloud Engineering role. A strange thing happened at that point. In people's minds, I was _moved_ into the cost side of the business. I was making the cost of the business more efficient, but I was also missing out on the upside. In the revenue-aligned positions that I enjoyed in the past, I was rewarded for innovation because innovation meant upside. But in the operations world, I fought a perception that innovation would disrupt the efficiencies already gained and take us (and our careers) backwards. In my mind, Cloud Engineering is the biggest revenue driver opportunity that I've ever seen. If we can find a way to ship ideas and features more quickly, then by definition our revenue will increase. It's a force multiplier for development. However, I'm guessing that most businesses out there will not see it this way. Most businesses see DevOps through an efficiency lens, because efficiency is all IT Operations has been about since it began. Last night a friend of mine was talking about this with me, and he said that he worked a DevOps job for six months writing Chef scripts, getting paid well, for an application with few to no users. He ended up having very little financial impact on that company. Then he pivoted to a development role where he increased his company's revenue by 1/3 in a year.
In the latter role, he used Cloud Engineering principles to create a CD pipeline, etc., but he also developed the capability to fundamentally change his business. What an exciting story. What's interesting to consider, though: let's say I split my friend up into a rockstar Developer and a rockstar Cloud Engineering person. They work together to create this same outcome. At the end of the day, who will the company reward and nurture more? I'm thinking the Developer. If that's the case, then those of us headed down the pure DevOps path might want to consider the limitations we are imposing on ourselves. I would love to hear your thoughts.

---

# How to Lower the Barrier to Entry

URL: https://hedge-ops.com/posts/barriers/

Discover how to lower the barrier to entry into technology careers. Learn strategies to support and encourage individuals transitioning into tech and foster diverse, innovative teams.

I realize my journey into technology isn't _normal_, per se. Many times people ask me how they can do it, too, and they note how I had advantages that they don't, and so they give up. Meanwhile, many people in technology note the need to lower the barrier to entry into careers in technology to better foster [teams with a variety](http://www.diversitas.co.nz/Portals/25/Docs/Diversity%20Matters.pdf) of thoughts and backgrounds. Research says that this leads to greater business outcomes, and we believe it and want to support it. A lot of times, then, when we see people (like myself) who have transitioned or are in the process of transitioning into a career in technology, we applaud them for their valiant efforts, recognize that even though there were things that lowered that entry point, their efforts were noteworthy nonetheless, and then we go about our lives and move on to the tasks at hand of solving complicated technology issues.
This assumes, of course, that there is just one hurdle to get over that barrier, and that after it you are free to live out your years in technological savvy and bliss. This, of course, is absurd. We all know that this person will have an uphill climb for quite some time, requiring a massive amount of grit and determination to persist. This blog post, however, is for you, the already-technologically-savvy, because don't you want someone with that much grit and determination, someone who can overcome the odds and learn technology totally from scratch, on your team? We're not talking about opening the floodgates and letting just anybody on our teams. We're talking about allowing in those who have proven their ability to learn and push through hard problems to work alongside us. If you want to encourage more people to take the jump into technology, then you sadly can't take a backseat. There are some things which require your engagement in order to see this come to fruition. I have some ways in which you can 1) lower the initial barrier to entry and 2) ensure continued success after the hurdle of that entry point is crossed. Let's assume you have someone in mind that you want to encourage in their journey into technology. Here's my advice to you, whether you be their friend who will walk alongside them, their potential employer, their colleague, or their mentor.

1. _Convince them to give you three weeks._ Surely you have a problem that needs solving with which this person can experiment. With me, it was [InSpec](http://inspec.io/tutorials/). [My husband](/about/michael) wanted to see if I could prove that InSpec was approachable to non-developers. I totally wasn't feeling it and thought he was crazy. He convinced me to give it three weeks. I couldn't fail: if I was able to learn it, then I proved the assumption correct; if I couldn't learn it, then that was valuable information to both him and the authors of the framework. The rest is history.
![Ghent Classroom](/article_images/2017-04-01-barriers/barriers2.jpg)

2. _Lend your privilege._ If you're reading this, then chances are that you have some sort of privilege. [Anjuan Simmons](http://www.anjuansimmons.com/) speaks about [lending your privilege](http://www.anjuansimmons.com/my-talks/lendingprivilege) to those who could benefit from your platform. I would not have had the opportunities that I've been given without so many people generously loaning me their privilege, most of all my husband. He loaned me his knowledge, his network, his time and energy, and his mind. Inspired by the generosity of others, I'm seeking to loan my privilege as I can, too.

3. _Do experiments to learn what motivates your apprentice._ This might require some trial and error. I did a series of experiments to see if I was more of a developer or more of an operations person. It turns out that I have more of a developer mindset, but how would I have known that without discovery? From there, we were able to chart a course of learning, starting very small and building.

4. _Create a sense of urgency._ Once you have a course charted, motivation is going to ebb and flow. The only thing that is going to get this apprentice of yours through those valleys is a sense of urgency. For me that meant self-imposed deadlines and goals, public accountability with Twitter and a blog, and a weekly meeting with [someone](https://twitter.com/chri_hartmann) I didn't want to let down who was loaning me his privilege.

![InSpec Onboarding Meeting Request Details](/article_images/2017-04-01-barriers/lendingprivilege.png)

5. _Discover inverted learning._ The most overwhelming thing for a person new to technology is the idea that they have to know _everything_. When I learned InSpec, I didn't know what I was testing.
InSpec is an infrastructure auditing framework, so I knew I was testing whether the infrastructure was the way it was supposed to be, but I didn't really know what those things meant. I had to dig in to learn that. When I had learned enough InSpec to move on and add Chef to my repertoire, I was able to remediate the failures with a cookbook, furthering my understanding of what those audits actually meant. Mind you, I was doing everything with oversight. I don't propose that you create a bunch of code monkeys who simply write code they don't understand, but rather that you use the code to teach, grow, and develop thinkers.

![Describe block in InSpec](/article_images/2017-04-01-barriers/shouldbeinspec.png)

6. _Don't rush it!_ Imagine your student's learning as a kanban board with columns _A_ (skills they have yet to learn), _B_ (skills they are currently learning), and _C_ (skills they've mastered), with the goal of a good flow through the board. As [Kathy Sierra](http://www.oreilly.com/pub/expert/kathysierra) notes in her [highly recommended talk](https://www.youtube.com/watch?v=FKTxC9pl-WM), we cannot bypass _B_! If we either pile too many skills into the _B_ column or try to rush them through _B_, then we will end up with a bunch of half-learned skills that will likely be lost. This leads to discouragement, frustration, failure, and a higher likelihood of giving up. Your student is best served with patience, given the allowance to master skills at their own pace.

![Learning Workflow - Kathy Sierra](/article_images/2017-04-01-barriers/kathysierra.png)

7. _Allow them to specialize in a skill._ Once they master a skill, they have the ability to practice it with confidence, advise others on it, and add value! If you've invested in the right type of person, then they desire more than anything to add value, and when you allow them to add value, you grow the confidence that will propel them to learn and master even more.

8.
_Discover what your team is lacking._ From what type of minds could your team benefit? What type of problem-solving is your team lacking? In what type of person do you want to invest? Where can you find this person? Can you make the bandwidth to invest properly in someone like this? Can you afford not to?

9. _Set them up for success with mentorship._ Your apprentice's uphill climb will last for quite some time. They will need mentorship focused on growth and development. Start out being very explicit about what is expected of them. Very slowly, become less and less explicit. Protect them from failure and set them up for many wins, big and small alike. Their confidence is key in this first year of transition. Without it, their grit and fortitude will not last.

I hope this propels you to be an integral part of someone's story. They need you, and you can be a game-changer for them.

---

# Why Habitat?

URL: https://hedge-ops.com/posts/why-habitat/

Explore the journey of a software engineer discovering the benefits of Habitat for application deployment. Learn about the pros and cons of various application automation approaches, and why Habitat stands out as a game-changer in the field.

I started my career as a software engineer, and I have always loved creating a new application and seeing the magic of that application being deployed to production. I love seeing the excitement on our users' faces when we talk about all the cool stuff we're working on. And I love making that _real_ for people. Over the years, I've become increasingly aware of the gulf that exists between making something real on my own machine as a developer and making something real for a user of my software who is experiencing an ROI from my work. That frustration led me to tackle the problem of how to better deploy an application into production. I've found [Habitat](https://www.habitat.sh/) to be a compelling but often misunderstood new option within this space.
In this post, I'll describe the pros and cons of other application deployment technologies and then, at the end, talk about what makes me so excited about Habitat. Here are the various approaches to application automation, from the simplest to the most complex:

## Scripted

In the past, when we created new applications, most of us did an initial demo or deployment by running through a list of items someone needed to do to run the application in an environment that a developer didn't build. There are files to be copied somewhere, commands to run, and validation to ensure that the application is running. In our starting scenario, people do this work manually or with custom-built scripts, which become more and more complex over time. The problem with the _manual_ or _scripted_ way is that solutions end up being bespoke per application, and thus poorly maintainable. There also isn't a great way to know whether the script was successful: many scripts will fail partway through, leave the system in an unhealthy state, and just kind of shrug when failures occur. Also, you usually won't use the scripted way in _all_ environments, just your production environment. This creates unintended surprises that lead to more brittle deployments and longer lead times to get deployments out. If you're using a manual or Bash/PowerShell-scripted way to deploy applications, I highly recommend you consider one of the better mechanisms defined below.

## Packaged

The next obvious solution to the problem of how to get your application into production is to package the application and its files with scripts that will deploy it. This is what we considered when we evaluated [XL Deploy](https://xebialabs.com/products/xl-deploy/). Also, in a Windows-only world one could use [Chocolatey](https://chocolatey.org/) for this purpose. These tools really shine when deployment of a package is relatively simple and isolated.
I love and use Chocolatey for third-party applications, like installing ChefDK or even Chrome on a new machine. The package mechanism also allows you to promote a single package through multiple environments, thus ensuring better quality when you go to production. The packaged mechanism is almost always a better model than the pure scripted mechanism mentioned above. However, we decided not to deploy applications this way because we wanted a more holistic model for managing the _entire_ machine that the application needed. For example, on an IIS machine, it's just as important that IIS is set up properly as it is that the website files exist with an IIS website set up. If we ignore the former, there is no value in the latter. So for complex applications, I don't recommend using a packaged mechanism for application deployment. I do recommend the packaged mechanism for third-party applications (on Windows, use Chocolatey), but limit its usage to isolated third-party applications.

## Configuration Management

Until recently, if one wanted to take a more holistic approach to application deployment automation, the best choice was a configuration management tool like [Chef](https://www.chef.io/). This has several advantages. First, with Chef you get a holistic machine-level environment within which your application will run. So with our IIS example, you get a _configured_ IIS server upon which your application will run. You can use [Test Kitchen](/posts/test-kitchen-required-not-optional) to ensure that the entire machine will run, so you have a much better ability to test that your deployment code works early in the process.
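To make the holistic idea concrete, here is a rough sketch of what a recipe for that IIS example might look like. This isn't our production code: the site name and paths are made up, and the `iis_site` resource assumes the community `iis` cookbook is a dependency.

```ruby
# Sketch only -- illustrative names, assuming the community `iis` cookbook.

# Configure the platform first: the web server role must exist and run...
windows_feature 'IIS-WebServerRole' do
  action :install
end

service 'w3svc' do
  action [:start, :enable]
end

# ...then lay the application down on top of a known-good platform.
remote_directory 'C:/inetpub/my_app' do
  source 'my_app_files'
  action :create
end

# iis_site comes from the community iis cookbook
iis_site 'my_app' do
  path 'C:/inetpub/my_app'
  port 80
  action [:add, :start]
end
```

Because the recipe owns IIS itself and not just the website files, Test Kitchen can converge and verify the whole machine rather than just the app on top of it.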
And integration testing with other third-party applications is natural as well; if you have a problem running an APM or security tool alongside your application, you'll find those problems more easily with a configuration management approach to application automation, because you'll more naturally be able to include all the machine dependencies in a coded, trackable artifact like a [Policyfile](/posts/policyfiles). This is ultimately the path we took two and a half years ago, and I'm glad we did. The holistic approach has proven more difficult to execute than a simple scripted or package-based mechanism, but it also gives us consistency, which gives us higher uptime and the [ability to scale](/posts/our-superbowl). This approach has its drawbacks, though. First, it has been difficult to get our developers and QA staff to really embrace Chef for _their_ environments. They can't just take a _Chef_ package and _run_ it on a developer or QA machine for feature testing. They probably need an entire separate machine, and they probably want it connected to a Chef Server. All of this overhead makes it difficult or impossible for a developer to want to use the Chef deployment mechanism locally. When we get to a shared, stable QA environment or UAT environment, it's fine. But for a QA person trying to test an app on a private local machine, Chef isn't a natural choice. The second drawback to this approach is the distinct difference between what a promise-based configuration management system is capable of and the workflow-oriented approach of a typical deployment. With deployment, you're talking about steps, like "first I upgrade the database, then I update the files, then I turn the websites on and add them back to the load balancer." A _desired state_ DSL like Chef, Puppet, or DSC is not a very natural way to express this.
We've gotten around it with Chef and can get by, but the unnatural expression of workflow within a promise-based DSL has slowed down our adoption of Chef. For example, a workflow-based deployment looks something like this:

1. Stop all services
2. Upgrade the database
3. Copy files to the right locations
4. Start all services

That's relatively simple, and how most people think of an orchestrated deployment. With Chef, however, it becomes quite difficult (and this is only on one machine):

1. Download the artifact file; notify 2, 3, 4 to run
2. Service action: stopped (only if notified)
3. Execute script 'upgrade database' (only if notified)
4. Copy all files (only if notified)
5. Service action: started (every time)

If you don't know Chef, you're probably thoroughly confused, and that's the point. Chef is just not very good at executing a workflow like this. You might say that Ansible or Puppet orchestration is better at it, but the reality is that you're still using a promise language to express a workflow problem. You've found a hammer, and now you think everything is a nail. Is there a better way? Perhaps.

## Containers

A lot of people I meet view Docker as a lightweight virtual machine runtime. They think that Docker's main benefits are faster startup and lower resource consumption. Recently [Wes Higbee](http://www.weshigbee.com/) helped me understand the true benefits of Docker and how they relate to application deployment automation. Docker, at the surface, gives you the best of both worlds between the packaging and configuration management approaches above. You can create a Docker container that _contains_ all the dependencies the application needs to run, and then ship that container to run in any Linux environment you want. So you now have a single artifact that is itself _not_ a script; it's a package that is _immediately_ ready to run. This changes everything. I no longer have to worry that IIS is set up incorrectly.
If I'm using a Docker container for Windows, I can use the `microsoft/iis` image and build on top of it to create a fully encapsulated running application that can run on any Windows Server 2016 host. On top of that, I get a more lightweight runtime, so I can take this image and very rapidly auto-scale during peak consumption times. In the past, I had to spin up a new server, and even if that was fully automated, I would have to wait a few minutes for it to be built. With Docker, I can have that server up in seconds, consuming fewer resources and therefore operating at a fraction of the cost, and then kill the server when the peak consumption time is over. Usually I'll run a scheduler like [Kubernetes](https://kubernetes.io/), [OpenShift](https://www.openshift.com), or [Mesosphere](https://mesosphere.com/) to _schedule_ when machines go in and out of operation and how upgrades occur. When you fully grasp what containers bring to the table for application runtime isolation and scale, it's very easy to get caught up in the excitement of what the future can bring. However, as I think about it more, my excitement has been tempered a bit. Containers are a powerful tool that can do both great good and great harm to your business. Let's consider a few risks. First, scheduled containers as described above rely on an immutable infrastructure to work. In other words, if you are used to logging into a machine to look at anything, or making any manual changes at all, you're not ready for containers. I often say to people that packaging/scripting is like playing junior varsity football, configuration management is like playing college football, and containers are like the NFL. If you're still in JV football, you're not going to get very far with the NFL equivalent. Yes, other companies have done it. But those companies also don't SSH into their servers to make changes. Do you? If so, you're not ready for this.
Work on becoming more mature in your processes, and then revisit it, perhaps. A second problem with the container approach is that you isolate the application itself, which is wonderful, but you replace that isolation with an essential _scheduler_ component that is itself complicated and therefore prone to error. In other words, your developer may say "hey, my Docker image works, what's the problem?" and at that level there is no problem. But at the scheduler level, there might be an orchestration problem or a runtime problem. You solved your isolation problem, in effect, by replacing it with another tool that few people understand and that developers are likely not going to run themselves. Instead of having the desired effect of making deployment _simpler_, it actually makes deployment more difficult, by introducing a runtime environment that allows little interactivity and troubleshooting. The final problem I have with containers is the latent issue of including a full Linux stack in the container image itself. I get this warning from [Julian Dunn's blog](http://www.juliandunn.net/2015/12/04/the-oncoming-train-of-enterprise-container-deployments/). Julian has some great points, and if you have time, read his post on the topic. The risk here is that if you include the current version of Ubuntu in your Docker image, and that version has security vulnerabilities that are discovered a decade from now, it's going to be difficult to change or update those images. In Docker, an image is immutable, so if you're doing it right, you need a pipeline set up to build a new image. Which leads to the question: are your production Docker images built within a continuous delivery pipeline? Are you prepared for that pipeline to be fully functional for the container's lifespan, which could span decades? For most enterprises I've interacted with, it would be a huge step to go from where they are to a fully functional and operational CD pipeline.
And on the startup side, do we _really_ trust that they will take the time to deploy Docker in an immutable, rebuildable way using a minimal image? I think that's wishful thinking. In short, containers provide a fantastic platform for isolating our application and for scaling it. But when you try to _operationalize_ the application, the complexity increases to the point that it becomes nearly impossible to pull off without making serious omissions that are going to bite you. Is there a better way? It looks like my friends at Chef have something quite intriguing:

## Habitat

Last summer, [Chef released Habitat](/posts/finding-habitat) as their application automation platform. Habitat is different in many ways from the previous categories, so much so that it deserves its own category. With Habitat, I can script the build and execution of my application the way I would if I were just scripting it from scratch. But, unlike with the typical scripting mechanism, the scripting is _built into the application_ package itself. So Habitat is a package? It's similar to that, yes. Habitat allows you to have a single file that represents the package. So I can give QA the application and they can run it very quickly. Or as a developer I can run my application locally or in a container very easily. But unlike the packaging mechanisms listed above in the classic model, Habitat will isolate the package's dependencies in order to give me the assurance that my application will run in any environment. Since I have that packaged deployment mechanism, I no longer need to fit the square peg (application deployment) into a round hole (configuration management). Instead, configuration management can do the things it is good at: making sure the machines on which your applications run are hardened and configured correctly.
With application deployment out of the configuration management code, the complexity drastically reduces and therefore the velocity of adoption drastically increases. And finally, Habitat will help you operationalize containers with a lot less complexity. It does this in two ways. First, inside its package is a _contract_ with other applications that helps fulfill the real-time configuration needs of a rapidly changing environment. For example, if I'm upgrading an application, I may need that application to be taken out of the load balancer, or I may need that application to talk to a database. That sounds easy in a classic model where these things change only occasionally, but in a containerized world, these things change within seconds. Habitat helps you manage the relationships within your applications and therefore allows you to truly operationalize microservices. It's also easy to take this package and run it as a developer in a simpler model. This is the genius of packaging these services with the app: you no longer have to deal with the complexity of a scheduler or some other runtime layer. Developers never want to throw the kitchen sink at a problem just to run something; they want to run something simple and get a production-like result. Habitat is the closest thing I've seen to achieving this goal. The second thing Habitat does to lower complexity for containers is build an application and all of its dependencies from scratch. This provides the isolation needed to truly make the package portable, but it also provides a declared understanding of _what_ an application's dependencies are. So if there is a vulnerability in one of the dependencies, it's as easy as querying for that dependency and then rebuilding the application with the newer dependency.
With the lower complexity of deploying applications, it's also quite easy to increase the maturity of an application's runtime _without_ having to resort to Docker and a scheduler. This way an organization can have a more gradual strategy for taking advantage of application isolation while building the cultural and procedural maturity needed to pull it off safely. For the reasons laid out in this post, I've become a fan of Habitat over the six weeks that I've been looking at it. Habitat has a shot at changing the game for application development and delivering on the promise and profitability of Continuous Delivery of our applications. However, there are currently some drawbacks one should be aware of before going down this route. First, Habitat is in its early stages. While I would be fine with putting it into production (in fact I'm days away from doing so), the tooling is not as mature as what one experiences with Docker, so an adopter will need to rely on Habitat's fantastic Slack channel to get up to speed. The second negative to Habitat is the learning curve, due to its Bash-centric authoring model. There are a few abstractions I miss within Habitat. For example, when I'm telling Habitat where to find the source, I want to just give it the answer (for example, from GitHub). Instead, I have to create a shell script to do some things that are not quite straightforward. Also, when I want to build an application, I want to tell it _build a node application from this source directory_. Instead, I need to copy/paste a Bash script I didn't write and change the _right_ things within that script. I'm told by the product team that this will be addressed with an upcoming [blueprints feature](https://github.com/habitat-sh/habitat/issues/1951). When that feature is delivered, I will probably go from cautiously recommending Habitat to wholeheartedly recommending it.
The final negative to Habitat, for the next few weeks hopefully, is that there is little Windows support. Many of our applications rely on Windows to run, so the value of this platform to us will greatly increase when that is delivered.

## Conclusion

There are many approaches to application deployment automation: Scripting, Packaging, Configuration Management, Containers, and Habitat. Of them all, I believe Habitat has the greatest chance at delivering a scalable, cloud-ready, and operational application deployment mechanism that can truly realize the promise and ROI of DevOps for application developers. I highly encourage those of you interested in this topic to begin following the project and contributing with feedback and implementations. There may be a time in the future when Chef is known more for Habitat and InSpec than for Chef, just as Apple is known more for their iPhone and iPad than their Mac. If the Habitat team delivers on the transformative vision they’ve laid out, that day will come very soon.

---

# Policyfiles Update

URL: https://hedge-ops.com/posts/policyfiles-update/

Discover the latest updates on Policyfiles and its integration with Chef Automate. Learn about the future of automation and how Policyfiles can enhance your operations.

I wanted to share with you some great news [about policyfiles](/posts/policyfiles) and let you know what I’ll be up to on this blog over the next couple of months. Over the past six months or so, the Chef product team and I have been working together to map out our partnership. In the early days, their support and coaching [were absolutely essential](/posts/technology-partnership), but as we’ve matured, their [Chef Automate](https://www.chef.io/automate/) product has become an essential element of operationalizing Chef at scale within our large organization. It was clear to me after our extensive initial discussions at Chef Community Summit in October that we weren’t going to be able to abandon Policyfiles at NCR.
The change management guarantees that Policyfiles give us are too central to our approach to automation with safety. Also, I didn’t have the time to go back and train everyone on a different and more complicated way. And finally, I could see from the highly attended and engaged open space we did on Policyfiles that this topic resonated with users. I then decided to do what I could to help Policyfiles gain traction as a feature and then try to work with Chef to see what we could do in their product.

In January, [Trevor Hess](https://twitter.com/trevorghess) became our Customer Architect, and we began working in earnest toward a solution for how to move forward. Trevor relied on his consulting experience to drill down to the essential elements of the solution and find the people that could help us. This led to Chef doing some research of their own to find out that this investment would indeed address a market need that has emerged within the last few months.

So I’m happy to let you all know that Chef’s Product team has confirmed that viewing and filtering on Policyfile data in Chef Automate has made it onto the roadmap for this year. Over the next week or so we’ll be working on getting Policyfile data from the Chef Server to a Kibana report that others who use Policyfiles can take advantage of as well.

The future of Automate is quite bright, and we’re thrilled to be a part of it going forward. Their investment in product management and UX is paying off tremendously. This is not the product I started with in 2014; it’s got a vision, team, and experience that is going to take Chef where it wants to go in the enterprise. I’m so happy that Policyfile users will get to take part in all that goodness. In May, [I’m going to speak at ChefConf](http://sched.co/9vZD) on Policyfiles.
In the meantime, I’m going to blog in detail about my approach to Policyfiles and Chef overall, in hopes that it will begin a movement among the Chef community to simplify the approach and thus broaden the adoption of Chef. As always, if you have any questions about Policyfiles, a few of us are active in the #policyfiles channel on the [Chef Community Slack](http://community-slack.chef.io/). Let’s talk there.

---

# A Chef Custom Resource

URL: https://hedge-ops.com/posts/chef-custom-resource/

Explore the process of creating a Chef custom resource for your certification exam preparation. This blog post provides a detailed walkthrough of the process, helping you understand and implement custom resources effectively.

I have been working toward my Chef certification here lately, and my husband came up with this really cool [kata](https://github.com/mhedgpeth/chef-by-example) that I’ve been working on lately to study up for my first exam. A kata is something that you do over and over for training and for the purpose of bringing the broken parts of the process to light. Its origins are in karate, and I’m sure you’ve heard of how it was implemented at Toyota with their famous Toyota kata. I really love this kata that Michael created because I can:

1. copy and paste the tasks into my [Checkvist](https://checkvist.com/),
2. create a [base cookbook](https://github.com/anniehedgpeth/chefkata),
3. create a branch,
4. run through the kata, knocking out each task on my Checkvist as I go,
5. and then create another branch off of the base cookbook the next time I go through the kata.

It’s been perfect for me. It causes the things that I just don’t understand to really stand out so that I can focus on them a little more. So one of those things that kept getting me stuck was [custom resources](https://docs.chef.io/custom_resources.html). For me, the documentation just wasn’t enough.
So I’m going to explicitly explain this one custom resource that I had to make so that I can come back to this and remember. Maybe it’ll help some of you, too!

## Why I wasn’t getting it

Here’s what the Chef docs say:

![Chef Documentation - Property](/article_images/2017-02-10-chef-custom-resource/chefdocs.png)

Honestly, when it all came down to it, I realized that I didn’t understand the documentation because I didn’t know the proper names for all the parts of the resource. My understanding now of a very basic resource declaration is this:

```ruby
resource 'name' do
  property value
  action :value
end
```

- `resource` is the type of the resource.
- `name` is the name of the resource. This can also be the value of a property if you don’t assign one.
- `property` is any word that you give to the property for use in the resource, not in quotes, so that you can use it as a variable.
- `action` is a property of the resource that tells `chef-client` what to do.
- `value` is the value that you’re giving to `property`.

## My Recipe’s Starting Point

Some of the tasks in the [kata](https://github.com/mhedgpeth/chef-by-example) are:

- Run the command `echo ran command > /var/website/command.txt`
- Don’t run the command the second time Chef converges (i.e. make it idempotent)
- If the command does run, do a `git pull` of the architect repository into [`/var/website/architect`](https://github.com/pages-themes/architect). It shouldn’t pull the repository every time.
- Refactor your command and pull into a custom resource called `chef_training_website`.
Okay, so those first three tasks leave me with these two resources (note: I did change the repo that he gave as an example):

```ruby
execute 'ran' do
  command 'echo ran command > /var/website/command.txt'
  not_if { ::File.exist?('/var/website/command.txt') }
end

git 'chefkata' do
  destination '/var/website/chefkata'
  repository 'https://github.com/mhedgpeth/chef-by-example.git'
  action :nothing
  subscribes :sync, 'execute[ran]', :immediately
end
```

There are a couple of reasons we’d want to make a custom resource:

1. So that we can simplify the recipe for better readability
2. So that we can call this resource in a simple manner elsewhere in the cookbook, possibly with variables in it which change it

So how do I make that whole block (above) into one custom resource? First, I’m going to show you what I ended up with, and then I’m going to show you what each thing means.

## My Custom Resource

Sibling to my `recipes` directory, I created a `resources` directory. Within that, I created a Ruby file that was just for that one custom resource that I wanted to create. I called it `chefkata.rb`, and put this in it:

```ruby
resource_name :chefkata

property :kata_repo, String, name_property: true

action :create do
  execute 'ran' do
    command 'echo ran command > /var/website/command.txt'
    not_if { ::File.exist?('/var/website/command.txt') }
  end

  git 'chefkata' do
    destination '/var/website/chefkata'
    repository kata_repo
    action :nothing
    subscribes :sync, 'execute[ran]', :immediately
  end
end
```

`chefkata` is the name of the resource that I called in my recipe after this was created. `kata_repo` is the property, which is just like what `command` is in the execute resource (`execute` being the resource_name). `name_property` is the thing that you put in quotes after the `resource_name`. It’s marked as `true` so that you can call the resource without the `name_property` (in this case `kata_repo`).
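The `name_property` fallback was the piece that confused me most, so here it is stripped down to plain Ruby — a toy sketch only (this is not how Chef is actually implemented; `ToyResource` and its methods are made up purely for illustration):

```ruby
# Toy model of a resource whose name property falls back to the resource name.
# NOT Chef's real implementation - just the idea in plain Ruby.
class ToyResource
  def initialize(name)
    @name = name
    @set_properties = {}
  end

  # Explicitly set a property, like inside a `do ... end` block.
  def set(property, value)
    @set_properties[property] = value
  end

  # Read a property; a name property defaults to the resource's name.
  def fetch(property, name_property: false)
    return @set_properties[property] if @set_properties.key?(property)
    name_property ? @name : nil
  end
end

# Like: directory '/var/website'  (path omitted, so it falls back to the name)
short_form = ToyResource.new('/var/website')
puts short_form.fetch(:path, name_property: true)  # "/var/website"

# Like: directory 'website' do path '/var/website' end
long_form = ToyResource.new('website')
long_form.set(:path, '/var/website')
puts long_form.fetch(:path, name_property: true)   # "/var/website"
```

Either way, the `path` resolves to the same value, which is the whole point of a name property.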
For example, these two resource calls are the same:

```ruby
directory 'website' do
  path '/var/website'
end
```

_and_…

```ruby
directory '/var/website'
```

By omitting the `path`, which is the `name_property` for the `directory` resource, Chef will set the `path` property to `/var/website` because that’s what I set the `name` to. So really, each resource has a different default `name_property` that you can find at docs.chef.io.

## In the recipe

After that was finished, I was able to then call that resource in my recipe, which looked simply like this:

```ruby
chefkata 'https://github.com/mhedgpeth/chef-by-example.git'
```

As you can see, I substituted the `name` for the `kata_repo` property. I could have also written it like this:

```ruby
chefkata 'example' do
  kata_repo 'https://github.com/mhedgpeth/chef-by-example.git'
end
```

## Concluding Thoughts

I have to admit that the way resources are created doesn’t feel all that intuitive to me just yet. It could very well be that I just haven’t used Chef enough for it to be intuitive yet; that’s what [Michael](http://hedge-ops.com) says, anyway. But that’s what this kata is for: to practice over and over until it is ingrained.

---

# Our Superbowl

URL: https://hedge-ops.com/posts/our-superbowl/

Discover how NCR revolutionized their delivery process with Chef, enabling customers to grow their businesses beyond traditional boundaries. This is our Superbowl story.

In the summer of 2014, I became convinced that NCR needed a more agile and consistent delivery process to safely enable our customers to grow their businesses beyond the four walls of the restaurant and into the new opportunities presented by mobile-first consumers.
While I was [no stranger to continuous integration](/posts/christmas-with-teamcity) and a testable pipeline for releases (what I was pleased to learn people call DevOps), I realized that our customers were not going to feel the full power of our investment into their success until we created a safe, repeatable, rapidly executable pipeline to transfer that value to them. I remember meeting with multiple people inside and outside NCR who counseled me to get closer to our revenue and away from the _cost_-focused operations space. I couldn’t let go of my intuition; if our customers weren’t realizing the true value of our development, then those investments were by definition worthless! So I _had_ to dive into this opportunity and work to change the game for our customers and NCR’s future.

That future became the present just a few short weeks ago on a Sunday evening. A key customer wanted to go all-in with an online ordering promotion for the Super Bowl this Sunday. They demanded capacity from us that went beyond our ability to provide by just adding a few more machines. We decided that, to best serve this customer and all of our customers, we were going to build another production environment from the ground up on one of our most complicated but strategically important SaaS products.

When I started this journey, this would have been an unthinkable level of risk. But since our forward-thinking leadership invested in our partnership with Chef, the new environment was provisioned in a fraction of the time that we expected, and with the full safety and consistency we needed to be confident that we could meet our targets. When our customer did stress tests, the product team was able to quickly react to any issues, and we had the confidence that we were going to increase stability through rapid change.

At one point on Sunday evening, we ran into a snag with our deployment. We had a problem with Chef that would have blocked us from moving forward.
I got on Slack and asked a member of my customer success team, Thomas Cate, for help. He directed me to customer support. Within ten minutes of filing the issue, Zach Zondlo, on a Sunday evening at 8 PM Central Time, responded and got on our conference bridge to help us out. The problem was resolved in ten minutes. That experience alone was a mic drop moment for our partnership with Chef. The people at Chef know how important operations is and culturally assign the priority and dedication that high severity situations warrant.

As we approach the big day on Sunday with a customer who will grow their revenue in a way they couldn’t have imagined just a few short years ago, I’m reminded of how much our partnership with Chef has meant to NCR and to me personally. I’m reminded of the early days when Matt Stratton believed that we could do this, even when I wondered how we were going to get everyone on board with such an ambitious and forward-thinking plan. I’m reminded of the patient and understanding ear Justin Redd gave me as he walked me through the difficult and frustrating early days when I had to get so much operational alignment from so many people in order to make Chef a reality in production. Justin never came to me with a formula. He and his team listened, and helped us down a path that was good for us first. They knew if we succeeded they would succeed.

I’m reminded of my newfound respect for sales and my long strategic conversations with Brittany Shaeffer. Brittany has helped me realize that value needs connection to be realized. So many times those of us with an engineering background think that technical outcomes will just stand on their own merits to the business. The sales organization at Chef does a phenomenal job at helping me make that value a reality for all stakeholders, so the business can provide the fuel necessary to achieve a high-velocity delivery model.
And I’m reminded of my friend Nathen Harvey who welcomed this BigCorp Texan into a world in which I felt more than a little out of place. I was only months into my Chef journey and for some reason submitted a talk to ChefConf that was accepted. By the time I got there I was completely convinced that I didn’t belong there. Nathen personally welcomed me into the Chef Community and showed me that I belonged there. I found this spirit of acceptance and inclusion so compelling that I felt safe enough to introduce [my wife Annie](/about/annie) into it. And the community has shown a tremendous amount of [respect and support](/posts/leaning-in) to Annie, which has already had a profound effect on our future.

I’m so happy to have gotten to this milestone in our partnership with Chef. I’m so happy to have leadership that believed in the vision and gave me the freedom and resources to execute that vision. I’m fortunate to know so many people within NCR that were able to take a chance and believe in what we were trying to accomplish. It truly takes a village to get anything done in a large organization, and I’m fortunate to be a part of a great one. Our partnership with Chef is only beginning. As we scale our solution to use Chef Automate, secure our expansion with Compliance, and add even more of our product suites to Chef, we’re confident that the best is yet to come!

---

# Introducing Cafe

URL: https://hedge-ops.com/posts/introducing-cafe/

Discover Cafe, a new project designed to simplify running Chef in a Windows environment. Learn about its features, installation process, and how it can streamline your Chef operations.

I was fortunate enough to be at Chef Summit in Seattle last November and learned two very valuable things there: First, I learned that the core power of Chef is in its community and ecosystem.
Within this ecosystem we can depart from the usual customer/vendor relationship, where you’re at the mercy of a product team and may or may not have enough sway to get your stuff done. Instead, you can work with the community to contribute your own stuff. This inspired me to be a contributor instead of just a taker.

The second thing I learned was that the Microsoft ecosystem was alive and well, but had a really hard time getting Chef to run in a consistent way on Windows. So I decided to do something about that over my holidays and a few long nights, and have come up with a new project I’m introducing today: [cafe](https://github.com/mhedgpeth/cafe).

Cafe exists to make running Chef in a Windows environment easier. It takes my over two years of experience with Chef on Windows and simplifies and streamlines how I think it should go. And fortunately, I’m able to rely on my software development background to create a product that will feel like an easy-to-use, real product to people. So if you’re still reading, and I hope you are, let’s go through a demo real quick, or if you’re more visual, [watch my demo on YouTube](https://www.youtube.com/watch?v=QxHi01vBkiw).

## Installation

Cafe is a standalone program that is fully operational by unzipping files into a folder and running `cafe.exe`. No Ruby or .NET dependencies. It just works.

To install:

1. Unzip the installation package into a folder
2. Run `cafe init` if you want it added to the path (you’ll need to reboot)
3. Run `cafe service register` to have the cafe server run in the background, so it can do things for you

## Runtime

Cafe is lightweight. To run the service it takes around 20MB of memory and no CPU. This means that you can put cafe on all your nodes, then install and run Chef as you want to.

## Walkthrough

After installation, let’s work on getting Chef bootstrapped on the machine.
The first step is to download and install [inspec](http://inspec.io/):

```shell
cafe inspec download 1.7.1
```

Once the inspec installer is downloaded, let’s install it:

```shell
cafe inspec install 1.7.1
```

Next we will do the same with the [Chef Client](https://docs.chef.io/ctl_chef_client.html):

```shell
cafe chef download 12.16.42
```

And then install it:

```shell
cafe chef install 12.16.42
```

Now that we’ve installed Chef, let’s bootstrap it. You can do this two ways:

1. [The Policyfile](/posts/policyfiles) way:

```shell
cafe chef bootstrap policy: webserver group: qa config: C:\Users\mhedg\client.rb validator: C:\Users\mhedg\my-validator.pem
```

2. The Run List way:

```shell
cafe chef bootstrap run-list: "[chocolatey::default]" config: C:\Users\mhedg\client.rb validator: C:\Users\mhedg\my-validator.pem
```

Both ways ask for a config file that will be your `client.rb` on the machine and a validator used to ask the Chef Server for validation.

Now that we’ve bootstrapped Chef, we can run it again on demand if we want to:

```shell
cafe chef run
```

We can even look at the `logs` directory and see that we have a rolling log that only has our `chef-client` runs in it. We can also see specific logging for our client and server.

We probably want to schedule Chef to run every 30 minutes or so. To do this we edit our `server.json` (the interval is in seconds):

```json
{
  "ChefInterval": 1800,
  "Port": 59320
}
```

And restart the cafe service:

```shell
cafe service restart
```

At some point you may even want to pause Chef on the node, so you can manually check a node’s state without fear of Chef changing anything. To do this, run:

```shell
cafe chef pause
```

And then when you’re ready to rejoin the land of sanity, you can simply run:

```shell
cafe chef resume
```

## Conclusion

If you’ve spent any time getting Chef to run on a Windows infrastructure, you should be pretty excited right now. If that’s you, please try it out and let me know how it’s going for you.
I’d like to get a community around cafe to become the standard for how we manage Chef in a Windows environment.

---

# How I’m Learning Docker

URL: https://hedge-ops.com/posts/docker/

Explore my journey of learning Docker as a part of my DevOps training. I share my experiences with inverted and relative learning styles, and how they helped me understand complex concepts like virtualization and isolation.

My [DevOps Training Plan](/posts/devops-training-plan) is going swimmingly, thanks for asking. ;) I shared with you last week about how I learned how to use the [Jenkinsfile](/posts/jenkinsfile) for a CI/CD pipeline. That was a lot of fun, and this week I’m inserting Docker into the mix! I honestly didn’t think I’d be able to grasp [Docker](https://www.docker.com/) very easily because of the advice of a few people and also because I didn’t understand virtualization and isolation very much. However, the purpose of this post is to show you why that trepidation is actually an important thought to consider as we seek ways to lower the barrier to entry into technology. How can we find ways to mitigate those fears and miscalculations?

## What is Inverted Learning?

Both the [Jenkins](https://app.pluralsight.com/library/courses/jenkins-2-getting-started/table-of-contents) and the [Docker for Windows](https://app.pluralsight.com/library/courses/docker-windows-getting-started/table-of-contents) Pluralsight courses that I took were taught by [Wes Higbee](https://twitter.com/g0t4), a very gifted teacher. In the Docker course he mentioned the concept of _inverted learning_, which piqued my interest because I realized that [I learned](/posts/inspec) [InSpec](http://inspec.io/) in exactly that way. I didn’t have to know the whole of IT to know how to write an audit control, but as I went along, I learned more and more about the things that I was auditing.
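To give a sense of how low that barrier was: a complete audit control is only a handful of lines in InSpec’s Ruby DSL. Here’s a sketch based on the inspec.io docs (the control name, impact, and expected value are just illustrative, and it runs under `inspec exec` rather than plain Ruby):

```ruby
control 'ip-forwarding' do
  impact 0.5
  title 'IP forwarding should be disabled'
  # kernel_parameter is a built-in InSpec resource
  describe kernel_parameter('net.ipv4.ip_forward') do
    its('value') { should eq 0 }
  end
end
```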
I had no clue what a [kernel parameter](http://inspec.io/docs/reference/resources/kernel_parameter/) was at first, but that didn’t mean I couldn’t write a simple test for it. InSpec lowered the barrier to entry into technology for me, and it’s important to understand why so that we can 1) lower the barrier to entry for others, and 2) learn other things in that same way.

## What about Relative Learning?

Intrigued by the concept of inverted learning, I went onto [Wes’s](http://www.weshigbee.com) blog to see if he had written about it, and I found a post on [Relative Learning](http://www.weshigbee.com/relative-learning/), which in many ways is kind of the opposite. It’s a learning style in which we can draw conclusions about something new based on the relative knowledge of the topic that we already have. This again gave a term to a concept with which I was familiar, except I recognized this concept because of my desire for its benefits as opposed to my experience with it. I’ve felt a lot of frustration because I feel like I am lacking a lot of the background IT knowledge that makes it easier for others who have been in the industry longer to grasp new concepts. I simply don’t have the luxury of extensive relative learning in that way (sure, I have a little, just not a ton).

## What about Docker?

Wes mentions how you may employ inverted learning to grasp Docker as well, which is probably why I was able to understand it much more quickly than I thought. The way it happens is that Docker allows you to use software without knowing how to set it up. So then when you’re ready, everything is consistently documented for you to learn how to set it up later when you see your [Dockerfile](https://docs.docker.com/engine/reference/builder/). Can you see the inversion? Docker sort of helps you to learn software backwards, like InSpec did for me.
He goes on to assert that [Docker and containers are about software](http://www.weshigbee.com/docker-and-containers-are-about-software/). And I highly recommend you click the link to see his simple comparisons.

## Relative and Inverted Learning with Docker

By now I hope you can see how both relative and inverted learning are key with Docker. For relative learning, you might take the things that you know about installing software (almost everyone has some knowledge of this from which they can draw) and/or your knowledge of virtualization and isolation and draw conclusions about containerization from that. Conversely, if you have no clue about virtualization, you can learn in reverse by spinning up a container using a Dockerfile and unpeeling the onion to see why that isolation is so important.

## Concluding Thoughts

I don’t know about you and your journey, but these learning styles bring me a lot of hope and perspective as I dive into super complex and uncharted territory for me. It helps me to realize that there is much more within reach than I formerly believed, because really you just have to find your angle; is it going to be easier for you to learn X from a relative, inverted, or other learning standpoint? Whether you’re learning yourself, which I hope everyone is, or you have the responsibility to create a learning environment within your own organization, or you’re teaching your own kids and/or loved ones, remember that there is more than one approach to internalizing concepts in technology. Happy Learning!

---

# Funnel

URL: https://hedge-ops.com/posts/funnel/

Explore the concept of the sales funnel through the lens of NCR’s history and its application to technology change initiatives. Learn how to effectively manage and implement change in your organization.
Years back when [NCR bought Radiant Systems](https://www.ncr.com/news/newsroom/news-releases/hospitality/ncr-completes-acquisition-of-radiant-systems), there was a lot of talk about a new term I hadn’t heard until then: _the funnel_. It took me a few years to understand this concept and how it relates to technology change initiatives, but NCR’s rich history as an effective sales organization was instrumental in gaining that understanding.

Over a century ago, NCR established itself as a pioneer in defining a sales process for how its products are bought. It had a revolutionary new technology, the cash register, that would massively change businesses who adopted it. At the same time, there were a lot of prospective customers who didn’t know how the new technology related to _their_ needs. NCR created a sales process that brought people into a set of stages that led them to the sale.

Fast-forward to today, and it is clear that the commitment to excellence in defining the sales process is alive and well at NCR. I was having lunch with the VP of Hosted Solutions on the sales side a few months back (thanks to Adam Jacob [imploring me to have lunch with my sales people](https://youtu.be/_DEToXsgrPc?t=2041)). My sales friend brought me through the sales process and how it rolls up all the way to the highest levels of leadership at NCR. On that day I got a new appreciation for how results-oriented and well-designed our sales organization is.

Over the next few months, I began thinking about how all of what I’ve learned about sales at NCR relates to what I’m trying to do. The truth is, if I’m really trying to be a change agent in my organization, I’m not that far off from those sales people in the early 1900s who were trying to bring the cash register to small retailers. There are a few people who are excited and adopting, a few people who are outright hostile, and a lot in the middle who can and will be persuaded when the time is right.
This is where the funnel concept in sales is important and helpful. In a sales funnel, you have stages that demand more and more buy-in and resources from potential customers until they are to the point where they are a paying customer. Whenever I map out a change initiative, I think about the funnel, what the stages are, and which people are in which stage.

At the very center of the funnel are the people who are committed to the change initiative and getting solid and visible benefits from the change. At first there is no one in this place, but you want to see who _might_ be in there in a few months.

A little bit out, you have people who are excited and see the benefits of the change, but haven’t yet gotten to the place where those benefits can be shown to outweigh the costs. This is a difficult place to be because you’re asking these people to spend resources toward an end that is not at all guaranteed. The people in this phase will be different depending on the lifecycle of the initiative. At first, you’ll have forward-thinking people who have a highly critical business problem to solve. Toward the end of the change’s lifecycle you’ll have people who would not have been involved unless there was a long track record of success that they could draw from. I’ve learned not to judge the people in the latter camp, but to empathize with their needs that stem out of their own unique market forces and commitments.

A bit further out, there is the part of the funnel where people are curious and happy about the change, but they aren’t ready to make the commitment needed for it to be successful. These people are kind of dangerous, because if you spend all your time on people who are interested but can’t commit to anything, then you’ll end up killing your change initiative because you don’t have any results.
It’s _absolutely imperative_ that before you really engage people with what you’re doing, you first ensure that they are willing and able to make the investment needed to get to the finish line. Lots of people _want_ the change but aren’t able (for whatever reason) to make it happen.

Then there is the group that is difficult to even put in the funnel: those people who are in some way hostile to what you’re trying to accomplish. These people are sometimes difficult to spot. A lot of times they’ll show up and say something like “My boss thinks that your initiative is really cool, and she wants me to check it out.” That could translate into a number of things. For one, the boss could have wanted a promotion but then was blindsided by this _other team_ that is doing great things and stealing the attention from the results she feels merit a promotion. So she is sending a detractor into the mix to build a case that maybe the change initiative isn’t as great as we think it is. That happens. That’s politics.

The key thing to remember with the final group in the funnel is to not focus your resources on them. That’s difficult to do, especially because you don’t want to alienate anyone, and you want to try to help people make the changes that they need to make. In reality, maybe the _my boss said_ person is your next champion of your change initiative. Who knows? Usually with these people I create a small experiment to let me see what they’re really seeing. It could be as simple as asking “why don’t you install ChefDK?” or as complicated as “let’s do a POC on your product.” Whatever it is, it should be small enough to not distract me from other more important groups of people further into the funnel.

I hope you’re able to see how the funnel concept can help you map out how you do a change initiative in an organization.
I use it on every change initiative I take on as a guide to (1) knowing how to allocate my limited resources and (2) how to strategically move people through a process to where they adopt and embrace change.

---

# Jenkinsfile

URL: https://hedge-ops.com/posts/jenkinsfile/

The basics of configuring a Jenkinsfile to create a CI/CD Pipeline in Jenkins. Includes version control, parallel commands, and an audit trail. Discover how to build, test, and publish.

## TLDR

_Use Jenkinsfile instead of the UI so that one ~ahem~ well-intentioned person can’t ruin your build._

## Resources

> - [Jenkinsfile documentation](https://jenkins.io/doc/book/pipeline/jenkinsfile/)
> - [Pluralsight: Getting Started with Jenkins 2](https://app.pluralsight.com/library/courses/jenkins-2-getting-started/table-of-contents) by [Wes Higbee](https://twitter.com/g0t4)—_serious shout-out to this guy. His classes are perfect - very thorough and clear._

## DevOps, Version Control, and Jenkinsfile

For real, though, one of the things I like most about DevOps principles is version control. Well, honestly, I have a love-hate relationship with it because Git still makes me sweat every time I do a pull request. Nonetheless, all DevOps starts with version control! It envelops what [Chef](https://www.chef.io/) calls [_the coded business_](https://twitter.com/chef/status/783317258227548160), which includes the concepts of infrastructure as code, pipeline as code, testing, etc., the end result being total automation. Therefore, if you’re trying out a product and can’t make it do what you want it to do with code, then you should stop using it and find something else.
So when you’re creating a CI/CD pipeline in [Jenkins](https://jenkins.io/), I’m going to try to convince you to create the build using a [Jenkinsfile](https://jenkins.io/doc/book/pipeline/jenkinsfile/) instead of the UI, so that it is subject to the change control mechanisms you already have in place (source control) and so that one very well-intentioned person doesn’t ruin your build.

## Pipelines

In super-simple terms, let me share with you my understanding of a pipeline in Jenkins. While a _job_ is a defined process, and a _build_ is the result of that _job_ being carried out, a _pipeline_ is a defined series of _jobs_ that can be interrupted in between processes by different events such as failed tests, approvals, et al. So when we use a Jenkinsfile, which is written in Groovy for Jenkins’s Pipeline plugin, we’re able to do a lot of things that you can’t do if you’re just creating a bunch of builds in the UI. I’ll show you a sample, and then I’ll tell you what I mean by that.

## Sample

Right now I’m working on a build for [Michael’s dotnet core application](https://github.com/mhedgpeth/cafe/). The [Jenkinsfile code](https://github.com/mhedgpeth/cafe/blob/master/Jenkinsfile) below is going to do this:

![Jenkinsfile Pipeline](/article_images/2017-01-01-devops-training-plan/jenkinspipeline.png)

Let’s take a look at the [code](https://github.com/mhedgpeth/cafe/blob/master/Jenkinsfile) stage by stage.

## compile

```groovy
#!/usr/bin/env groovy
stage('compile') {
    node {
        checkout scm
        stash 'everything'
        dir('src/cafe') {
            bat 'dotnet restore'
            bat "dotnet build --version-suffix ${env.BUILD_NUMBER}"
        }
    }
}
```

In the _compile_ stage, we’re building the dotnet core application. First, we need to declare our stage with `stage('compile')`. After that we’re going to define what happens within our node. Now this gets super confusing to me because of the vernacular. We throw around words that mean different things in different contexts, and that totally doesn’t work for me.
Nonetheless, what I think it means is that everything that happens within the `node` block is considered a _build step_ (at least in the context that I mention above) that will be given to an executor to carry out serially within the stage. The executor will check our code out from source control and stash all the info that we need for later stages. Then we’re going to run `dotnet restore` from the `src/cafe` directory to get all the dependent packages ready for the build. After that, it’s going to run `dotnet build`, and then we have a compiled application!

## test

```groovy
stage('test') {
    parallel unitTests: {
        test('Test')
    }, integrationTests: {
        test('IntegrationTest')
    },
    failFast: false
}

def test(type) {
    node {
        unstash 'everything'
        dir("test/cafe.${type}") {
            bat 'dotnet restore'
            bat 'dotnet test'
        }
    }
}
```

Now here in the _test_ stage we’re going to run parallel stages. That means that we want them to run at the same time, so if that’s going to happen, then we need two different executors to do that. You can select how many executors (or worker bees, as [Wes](http://www.weshigbee.com/) calls them) you have, but there are two by default. That’s perfect for us, because we have just two parallel stages to run, `unitTests` and `integrationTests`.

You’ll see there that I decided to define a method instead of writing out the whole thing, since the only thing that changes from stage to stage is the test type. So it’s helpful for me to look and see what the method is defining first, and then go up and look at how it’s called. As you can see in the method, first we’re going to `unstash 'everything'` that we stashed in the _compile_ stage. The reason we’re doing that is that we could possibly be running this stage on a different node than the one from which we checked out our source, so the files from the repo may not be there. But the master knows where you stashed it to begin with in that Pipeline job.
Then we’re going to `restore` it (get all the dependencies loaded that we need) and then run the `test`. And that’ll happen for each test type that we called simultaneously.

## publish

```groovy
stage('publish') {
    parallel windows: {
        publish('win10-x64')
    }, centos: {
        publish('centos.7-x64')
    }, ubuntu: {
        publish('ubuntu.16.04-x64')
    }
}

def publish(target) {
    node {
        unstash 'everything'
        dir('src/cafe') {
            bat "dotnet publish -r ${target}"
            archiveArtifacts "bin/Debug/netcoreapp1.1/${target}/publish/*.*"
        }
    }
}
```

And finally we come to the `publish` stage. Here we’re running three parallel stages using another method definition, so we’ll actually need three executors. If we only have two to begin with, that’s okay, because the third one will just get in line and run after. If you look down at the `publish(target)` method, you can see that we’re `unstashing` in each stage again. And from the same directory as before we’ll `publish` the application to a specified platform. After that, `archiveArtifacts` makes that application available on the Jenkins server for you to do what you want with it.

## What you can’t do in the UI

The UI is a typical UI, right? It’s there to help make some of the decisions for you. It wants to make your life easier, but everything in life is a tradeoff, so you have to sacrifice some functionality.

- You can’t run parallel commands in the UI, just sequential.
- You can’t commit it to version control and have an approval and promotion process in the UI.
- You can’t know what changes were made in the Pipeline.

The beauty of creating your Jenkinsfile for the Pipeline plugin is that you can manipulate it exactly the way that you want it. You have way more control and options than in the UI alone.
The benefits that they make note of on [their website](https://jenkins.io/doc/book/pipeline/jenkinsfile/) are:

- Code review/iteration on the Pipeline
- Audit trail for the Pipeline
- Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project

## Concluding Thoughts

I told you in my last post that I’ve set a training plan for myself this coming year, and Jenkins was at the top of the list. And it’s another one of those technologies that creates an inverted learning environment for me, as I touched on in my [last post](/posts/devops-training-plan). The more I discover these technologies, the more encouraging it is to me that I don’t have to know everything about everything to be able to add value. I can take something like Jenkins and learn a little about every aspect of deployment by creating builds. There are many learning opportunities wrapped up in technologies like this. More to come about inverted learning. My interest is piqued!

---

# My Devops Training Plan

URL: https://hedge-ops.com/posts/devops-training-plan/

Discover my comprehensive DevOps training plan for the new year. Learn about the projects, tools, and technologies I’ll be using and comparing. Follow my journey on the blog!

Happy New Year, my friends! Despite the bad press that 2016 got, I actually had a pretty good year. It was arguably the year of the greatest risk, challenge, and also growth in my life. I’m so lucky to get to continue that growth at a company that not only supports that but encourages it. So I thought as a way to kick off the new year, I would share with you the plan I have for continuing my growth in technology this coming year. I’m so super excited about it, and don’t you know, I’ll be tracking my learning here at the ol’ blog. I will be comparing the different technologies and creating tutorials here and there. My projects will center around application deployment and the CI/CD pipeline that will support that.
Honestly, it’s been a bit of a black box for me up until now, so I’m so excited for it to all start coming together. I’ve learned most of these technologies solidly but haven’t had the opportunity to see the magic of it all being used together. So here’s the low-down of what I’ll be spending every spare training moment on in the coming year.

## Development

During my training these will be the everyday tools in my tool chest. Honestly, they have been since I started, but I’ve been on projects lately where I have gotten out of daily practice, so I’m happy to get back into using them daily.

- _[Git](https://github.com/anniehedgpeth)_—Will I ever stop getting nervous about branching?
- _[Visual Studio Code](https://code.visualstudio.com/)_—I tried Atom, but I still like VSC better, maybe because it’s what I’m used to.
- _CLI_—I’d especially like to be more proficient at the Azure CLI.
- _[Test Kitchen](http://kitchen.ci/)_—The other day I learned that not everyone uses Kitchen, and I felt sorry for those people.

## Configuration Management

While Chef is what I desire to focus on, I think it would be a great exercise to learn the basics of Ansible and DSC to be able to know the reasons I would choose one over the others.

- _[Chef](https://www.chef.io/)_—It’s amazing how quickly one can forget things while not using them regularly! I’m hoping it’s like riding a bike.
- _[Ansible](https://www.ansible.com/)_—I’ve never used Ansible, so it will be fun to see what it has to offer.
- _[DSC / Powershell](https://msdn.microsoft.com/en-us/powershell/dsc/overview)_—There’s a consultant at my work who is a huge DSC fan, and he just learned Chef, so I’m planning on picking his brain when I get to this point.

## Security

- _[InSpec](https://www.inspec.io)_—I’m excited about seeing it in other contexts and gaining way more comfort and familiarity with it.
- _[Hashicorp Vault](https://www.vaultproject.io/)_—I’m hoping to unlock the mystery of Vault. Right now it’s a total enigma to me.
## Pipeline / CI/CD

I think learning all three of these would give me a good basis for comparison. And since I’ve already worked with TeamCity, I’m starting with Jenkins.

![Jenkins Pipeline](/article_images/2017-01-01-devops-training-plan/jenkinspipeline.png)

- _[Jenkins](https://jenkins.io/)_—In the short time I’ve been learning Jenkins, it’s way easier than TeamCity, but that’s possibly because my project is simpler. I’ll be interested in the side-by-side comparison.
- _[TeamCity](https://www.jetbrains.com/teamcity/)_—I’ve only worked on one TeamCity project in the past, so I’ll be glad to get more experience with it.
- _[Chef Workflow](https://docs.chef.io/workflow.html)_—This will be totally new to me, but I don’t know of a ton of people who would choose it over Jenkins, so I want to know why.

## Provisioning

- _[Terraform](https://www.terraform.io/)_—A lot of people I look up to use Terraform religiously, so it’ll be interesting to see if my bias remains with Terraform after learning to provision with all three methods.
- _[ARM Templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates)_—I’m interested in learning when I would use this over Terraform.
- _[Packer](https://www.packer.io/)_—I haven’t worked with Packer at all, so this will be new, too.

## Azure

This is a given since I’m working at an Azure shop, but I’m working on honing my Azure skills more as I use all of the above technologies to provision Azure. My focus, however, will be the following.

- _[AD](https://www.microsoft.com/en-us/cloud-platform/azure-active-directory)_—I’m looking forward to the day that this isn’t such a pain in the ass for me to set up.
- _[PaaS](https://azure.microsoft.com/en-us/overview/what-is-paas/)_—I see the industry going toward containers and/or PaaS, so I need to keep my head in the game with PaaS.
- _Networking_—[Michael](/about/michael) and I were trying to set up a network at home for our lab here, but we hit a few dead ends.
I’m surely planning a tutorial on how we set that up, because it will be a feat once we finish. Lots of trial and error.

## Containers

Containers are all new to me! But this is obviously the direction in which we can see the industry moving, so I’d love to keep up with it. I originally thought that it would come after more mastery of the other topics, but I’m working on Docker right now, and it’s more accessible than I thought it would be.

- _[Docker](https://www.docker.com/)_—This is super fun to learn and not as complicated as I thought.
- _[Mesosphere](https://mesosphere.com/)_—This is a little scary for me, but I’m excited about it.
- _[Kubernetes](http://kubernetes.io/)_—This is new to me, too, so the more, the merrier.
- _[Serverless](https://azure.microsoft.com/en-us/services/functions/)_—I don’t even know, seriously.

This past week, I’ve been working on the first steps needed to move forward with this training plan. I’m loving it! I’m working on Jenkins/Chef/Terraform/Docker in the next two weeks. Currently, I just created my first Jenkinsfile, and I am taking a Pluralsight course on Docker to extend the pipeline. It’s so fun! Stay posted for a blog post about how to create a dotnet core build in Jenkins!

## Concluding Thoughts

Currently, I’m taking a [Pluralsight course on Docker](https://app.pluralsight.com/library/courses/docker-windows-getting-started/table-of-contents) by Wes Higbee, and he talks about how learning Docker is a sort of inverted learning because it allows you to use software without knowing how to set it up. Then, when you’re ready, everything is consistently documented for you to learn how to set it up later when you see your Dockerfile. That’s exactly what I did with learning InSpec, though, so it was cool to hear him put a name to it. I had no idea what InSpec was testing; I just knew that I was testing stuff. And so I was able to build from that and use InSpec as a springboard for further learning.
With that in mind, one might consider Docker an excellent tool for lowering the barrier to entry into technology. More to come on this topic because it’s truly fascinating to me and important to the industry.

---

# 7 Options for Implementing Policyfile Attributes in Any Environment

URL: https://hedge-ops.com/posts/policyfile-attributes/

Seven ways to move away from Chef node, environment, or role attributes to Chef policyfile attributes. This can be the hardest part of migrating to Policyfiles.

When you start with [policyfiles](/posts/policyfiles), you quickly fall in love with the simplicity of the workflow and how easy it is to learn and teach. However, you’re also faced with an apparent show-stopper to adoption: there are lots of community cookbooks out there that expect certain attributes to be in certain locations. It can be quite confusing; I’m sure it’s kept a lot of people from adopting the feature. So let’s get that one out of the way in this post. We’ll take the use cases from easiest to most difficult:

## Options for Policyfiles (from Easiest to Most Difficult)

## Option 1 (Easiest): Define Attributes within Policyfile

Many times you’ll come across a community cookbook that expects attributes to be defined for it to properly run, like with the `apache` cookbook. If the behavior of your cookbook doesn’t change very often, you can declare those attributes in your `Policyfile.rb` if you want to:

```ruby
# in Policyfile.rb
default['apache2'] = {
  listen_ports: ['80', '443']
}
```

That will get you by for simple situations, but if you’re dealing with half a dozen or more policies that use this cookbook, this will get very repetitive, and therefore error-prone. My rule is that if you repeat yourself more than three times, then [you need to do something about it](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
## Option 2: Define Attributes within Wrapper Cookbook

In this case, I would create a wrapper cookbook called `mycompany-apache` and define the attributes there. Then I can use that recipe in the run list for all of my policies.

```ruby
# in mycompany-apache/attributes/default.rb
default['apache2'] = {
  listen_ports: ['80', '443']
}
```

In fact, as a rule of thumb, I generally try to keep attributes out of my policyfiles. Declaring them there is great for smaller cases, and if you just have a few and are getting started, by all means do it, but it creates an unmaintainable mess if you have a lot of machines that need to run against the same attributes. As time has gone on, I’ve come to think of Policyfiles as defining _what_ Chef scripts should run on a node, with something else handling the configuration elements that those scripts need.

## Option 3: Define Environment-specific Attributes in the Policyfile

With most if not all attributes now removed from my policyfiles, I come across a good reason to include them again: I need to have environment-specific settings that my cookbooks use. For example, let’s say that I need to use `testdatabase` for my `qa` environment and `productiondatabase` for my `production` environment. You can do this pretty easily with Policyfiles:

```ruby
# in Policyfile.rb
default['qa']['myapplication']['database'] = 'testdatabase'
default['production']['myapplication']['database'] = 'productiondatabase'
```

Now in my recipe code, I can simply write:

```ruby
# in recipes/default.rb
database = node[node.policy_group]['myapplication']['database']
```

This is, frankly, how most of our applications work with Policyfiles. It has been good enough for us and therefore is what we went for.
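To make that lookup concrete, here is the same resolution outside of Chef, as plain Ruby. The hash stands in for the node object, and `policy_group` is hard-coded where Chef would supply `node.policy_group`; this is just an illustration, not profile or cookbook code:

```ruby
# Plain-Ruby sketch (not a Chef run): attributes keyed by policy group
node = {
  'qa'         => { 'myapplication' => { 'database' => 'testdatabase' } },
  'production' => { 'myapplication' => { 'database' => 'productiondatabase' } }
}

policy_group = 'qa' # in Chef this comes from node.policy_group
database = node[policy_group]['myapplication']['database']
puts database # => testdatabase
```

Swap `policy_group` to `'production'` and the same line of recipe code picks up `productiondatabase`, which is the whole point: the recipe never changes between environments.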
Since then, we’ve come across other use cases which caused us to go further:

## Option 4: Define Environment-specific Attributes in the Policyfile, Consume Them As Normal Attributes

One of the major drawbacks of the previous section is the need to change your code to deal with the `policy_group` within the hash to get to your value. That’s fine if you’re starting from scratch like I did, but it won’t work for everyone. Thankfully [code ranger](https://coderanger.net/) and friends created the [poise-hoist](https://github.com/poise/poise-hoist) cookbook, which handles a lot of the translation for you. To use it, just add `poise_hoist` to the `run_list` of your `Policyfile.rb`. Then, assuming you have the structure from the previous section, you’ll be able to get the database without using the `policy_group`:

```ruby
# in recipes/default.rb, now using poise_hoist
database = node['myapplication']['database']
```

If your use of environments has kept you from using Policyfiles, you now no longer have any excuse. Yes, that’s right: you can use Policyfiles without changing a line of code by using the `poise_hoist` cookbook!

## Option 5: Define Role-specific Attributes in the Policyfile, Consume Them As Normal Attributes

The same workflow we used above to migrate from environments can be used with our roles as well. We should first understand that roles don’t exist within policyfiles. To accomplish the same end, we use a wrapper cookbook that encapsulates everything we want that role to do. For example, you could implement a base role that you want everything to follow by creating a `mycompany-platform` cookbook.
Its default recipe could be something like this:

```ruby
# in mycompany-platform/recipes/default.rb
include_recipe 'logging_provider::default'
include_recipe 'chef-client::default'
```

In that same cookbook you could also define attributes that control your cookbooks:

```ruby
# in mycompany-platform/attributes/default.rb
default['chef-client']['interval'] = 3600
default['logging_provider']['url'] = 'https://insanely-expensive.io'
```

If you have some elements that change by environment, use the techniques above to do that: `poise-hoist` will merge those elements into the places that your recipes will expect to look. For example, if you wanted to make the above section of code environment specific, you would write:

```ruby
# in Policyfile.rb
default['qa'] = {
  'chef-client' => { 'interval' => 900 },
  'logging_provider' => { 'url' => 'https://test-cheaply.io' }
}
default['production'] = {
  'chef-client' => { 'interval' => 3600 },
  'logging_provider' => { 'url' => 'https://insanely-expensive.io' }
}
```

## Option 6: Support Lots of Environments Across Lots of Policyfiles with Data Bags

The techniques outlined above work well for applications that have a minimal number of roles and environments. For example, we have one application with a web and application tier and three different environments. For that, we have our attributes declared in the `application-webserver.rb` and `application-appserver.rb` policyfiles and then flow those policyfiles through our pipeline from `qa` to `uat` and finally to `production` policy groups. This starts to fall apart when you need a lot of roles (or policyfiles) that use environment-specific attributes.
At first glance, you might be tempted to create new policy groups, like:

```ruby
# probably not a good idea
default['qa'] = {
  my_application: { database: 'qaserver' }
}
default['michael-performance'] = {
  my_application: { database: 'mhperdb' }
}
default['mary-testing'] = {
  my_application: { database: 'marydb' }
}
```

You’ll encounter a huge problem right away in that you have to copy and maintain these complex structures across a lot of policyfiles. That’s a recipe for something to go very wrong. Instead, we will offload the attribute definitions here to [data bags](https://docs.chef.io/data_bags.html). So we’ll have a different data bag per environment:

```json
{
  "filename": "environment-michael-performance.json",
  "my_application": {
    "database": "my-database"
  }
}
```

In this example, we’ll still keep `michael-performance` as the `policy_group` for the node, but instead of defining any of the attributes in the Policyfile, we’ll define them in the `environment-michael-performance` data bag. Before the application cookbook runs, we can merge what is in the data bag into our node attributes by borrowing what the [poise-hoist cookbook does](https://github.com/poise/poise-hoist/blob/master/lib/poise_hoist.rb#L38):

```ruby
# I haven't run this but hopefully you get the idea
environment = data_bag_item('myapplication', "environment-#{Chef::Config.policy_group}")
Chef::Mixin::DeepMerge.hash_only_merge!(node.role_default, environment)
```

This will, as before, let you keep Policyfiles and largely the same code as before, because you were able to bring the environment data in from another source and merge it.

## Option 7: Multi-Dimensional Attributes with Data Bags and Policyfiles

We have a couple of products that take this even further. You might have two dimensions of settings: in America, you use one service, and in Europe you use another. This is true for all environments, but the environments have their own distinct settings.
In this situation you can create two different types of data bags: `environment-uat` as before, but also an `american-services` and a `european-services`. Then you could have nodes know which environment they’re in and load the appropriate settings. You would have a couple of data bags:

```json
{
  "filename": "american-services",
  "weather": "american-weather-services.com"
}
```

and

```json
{
  "filename": "european-services",
  "weather": "letempsenfrance.fr"
}
```

Then you can merge that in as normal, based on timezone or whichever element fits your situation:

```ruby
service = Time.now.gmt_offset < 0 ? 'american' : 'european'
service_settings = data_bag_item('my_application', "#{service}-services")
Chef::Mixin::DeepMerge.hash_only_merge!(node.role_default, service_settings)
```

The long-term solution for much of this is to define it within a service discovery product like Consul. But that requires learning and adopting another thing, which will probably slow down getting the wins you’ll need early on to be successful. Get done what you need to get done here, and then adopt other things that work for you one step at a time.

## Policyfile Nirvana—Infrastructure Versions Decoupled from Scripts

When we start with policyfiles, as with the first few use cases above, we tend to put a lot of information in the policyfiles themselves. As things get more complicated, we start to shy away from that because it creates maintainability problems. I’ve grown in my usage of policyfiles to think of them as a mechanism for getting the right versions of the Chef recipes onto the node to simply run them. That’s where they really shine; they’re an excellent dependency management/workflow simplification feature. They’re _not_ going to shine for the other things. So when a version of my website changes and my scripts therefore need to change the file they’re using to load that website onto a webserver, I shouldn’t use the policyfile for that.
Instead, I can use the _same_ policyfile (or version of scripts) to download and install a _new_ version of my website. In this case, I’ve probably moved to the data bag-based definition of what that website is:

```json
{
  "filename": "environment-qa.json",
  "website": {
    "version": "1.0.2"
  }
}
```

If I’m going to upgrade that website, I probably want to just update the data bag. The script remains the same. This, to me, is a nirvana situation. I’m running stable scripts/recipes in all environments, and I am changing small elements of how they run to respect what environment they’re in. I’ve avoided duplication and therefore increased the operability of the solution. So, take a lesson from me: if you’re dealing with a complex system with a lot of node types, decouple your application version from the scripts that are running. Your CI/CD pipeline will simplify, and it will be simpler to know what changed, why, and how it affects your situation.

## Conclusion

If you follow the techniques outlined above, you’ll have no issue migrating to Policyfiles. You’ll still need to make sure that there is a solid business case for it, but I think you’ll find that the return from better change management and easier operability will more than pay for the costs you’ll incur from using the techniques above.

---

# InSpec Basics: Day 10 - Attributes with Environment Variables

URL: https://hedge-ops.com/posts/inspec-basics-10/

Explore the final part of our InSpec Basics series, where we delve into using attributes with environment variables for testing different environments in TeamCity. Learn how to create and configure MySQL passwords using these attributes.

My last post about attributes was really born out of an issue I’d had creating an InSpec profile that tests the build configuration of a machine within a TeamCity pipeline, testing all the different environments and making sure the correct MySQL passwords were entered for each environment.
In my last post, I gave you a crash course in how to use attributes, so now I’m going to show you how I used attributes to create the passwords that I needed using environment variables. But first, if you’ve missed out on any of my tutorials, you can find them here:

- Day 1: [Hello World](/posts/inspec-basics-1)
- Day 2: [Command Resource](/posts/inspec-basics-2)
- Day 3: [File Resource](/posts/inspec-basics-3)
- Day 4: [Custom Matchers](/posts/inspec-basics-4)
- Day 5: [Creating a Profile](/posts/inspec-basics-5)
- Day 6: [Ways to Run It and Places to Store It](/posts/inspec-basics-6)
- Day 7: [How to Inherit a Profile from Chef Compliance Server](/posts/inspec-basics-7)
- Day 8: [Regular Expressions](/posts/inspec-basics-8)
- Day 9: [Attributes](/posts/inspec-basics-9)

Okay, so I had to create a way in which my profile could read a variable for a password within a control. In this post I’ll lead you through how I did that.

## Here are the steps I took

1. [Query the MySQL database manually](/posts/inspec-basics-10#query-the-mysql-database-manually)
2. [Make the password in the control into an attribute](/posts/inspec-basics-10#make-the-password-in-the-control-into-an-attribute)
3. [Make mysql password attribute configurable](/posts/inspec-basics-10#make-mysql-password-attribute-configurable)
4. [Create a rakefile](/posts/inspec-basics-10#create-a-rakefile)
5. [Test it out in TeamCity](/posts/inspec-basics-10#test-it-out-in-teamcity)

## Query the MySQL database manually

Before I could shoot off a bunch of code, I needed to make sure I could do it manually. So I needed to query the MySQL database in an ssh session. I was having issues doing this in Test Kitchen, so I knew the surefire way to get the proper output that I needed was to ssh into a real, live development environment. So with access to that, I ssh’ed into it and ran the appropriate MySQL command to get the output I needed.
```shell
mysql -uUSER -pPASSWORD -e "SELECT User, Host FROM mysql.user;"
```

That stdout was exactly what I needed to write the proper control to test that I had the right users set up in my database. So now I could go back to my control and hard-code the password to see if it would test properly. The control would end up looking something like this, but with my hard-coded password as the default:

```ruby
password = attribute('password', default: 'HARDCODEDpasswordHERE', description: 'password for admin user in mysql database')

db = mysql_session('admin', password)

describe db.query("SHOW DATABASES LIKE 'mydatabase'") do
  context "'mydatabase' database exists" do
    its('stdout') { should include 'mydatabase' }
  end
end

describe db.query('SELECT User, Host FROM mysql.user') do
  its('stdout') { should include 'admin %' }
  its('stdout') { should include 'admin localhost' }
  its('stdout') { should include 'user %' }
  its('stdout') { should include 'user localhost' }
end
```

After some trial and error (it took a while to get to this point), it worked, and I was ready to move on.

## Make the password in the control into an attribute

So you see up there how the password calls an [attribute](/posts/inspec-basics-9)? Well, eventually I would have to make an attributes yaml, but don’t worry, before that I just hard-coded the value. So I made a directory in my profile called `attributes`. Then I created a file in there called `attributes.yml`. The yaml was very simple, like this:

```yaml
password: HARDCODEDpasswordHERE
```

Scroll down to the bottom of [_this_](http://inspec.io/docs/reference/profiles/) page for more info on it. So I tested that out to see if it worked on my development environment. From my profile directory on the command line I ran:

```shell
inspec exec . -t ssh://USERNAME@DEVENV -i ~/path/to/key/.ssh/id_rsa --attrs attributes/attributes.yml
# OR --password=PASSWORD if not using a key
```

It worked; great! Let’s keep moving!
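Since the attributes file is plain YAML, you can sanity-check what InSpec will receive from `--attrs` with a couple of lines of Ruby before ever running the profile (just a sketch for inspection; not part of the profile itself):

```ruby
require 'yaml'

# Parse the same content as attributes/attributes.yml and peek at the value
attrs = YAML.safe_load('password: HARDCODEDpasswordHERE')
puts attrs['password'] # => HARDCODEDpasswordHERE
```

This kind of quick check is handy later, too, once the file is generated from a template, to confirm the value actually landed where the profile expects it.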
## Make mysql password attribute configurable

So I still needed a different password for each environment that I ran this on, right? So this hard-coded yaml wasn’t going to cut it. I needed a different yaml for each environment. Enter the erb and rakefile. I created a template that builds this yaml each time for me. If you haven’t used an `erb` before, it’s basically a template that creates files for you. You have to run a `rake` command before you run your InSpec profile so that your desired file, in this case our `attributes.yml`, is generated from the `erb`.

The first thing I did was to create another file in my `attributes` directory called `attributes.yml.erb` (the same name as my `attributes.yml`, just with `erb` at the end). Next I had to figure out which environment variable to use for the database password. It was something like `<%= ENV['Password'] %>`. So I copied what was in my `attributes.yml` and pasted it into my `attributes.yml.erb`. Then I changed the hard-coded password to be the environment variable password.

```yaml
password: <%= ENV['Password'] %>
```

## Create a rakefile

Once I had my template (`erb`), I needed to generate the desired file (`attributes.yml`). So to do that, I had to create another file in my InSpec profile called `rakefile.rb`. That’s the magic file that tells the `rake` command what to create.

```ruby
require 'erb'

task :default => :generate

task :generate do
  Dir.glob('./attributes/*.yml.erb') do |rb_file|
    template = ERB.new File.new(rb_file).read, nil, '%'
    File.open(rb_file.chomp('.erb'), 'w') do |f|
      f.write template.result(binding)
    end
  end
end
```

As you can see, this file is going to generate another file out of each of the `.yml.erb` files in the `attributes` directory (at this point there was just one). So first, I made sure my `rake` worked. I deleted the `attributes.yml` (copying and pasting its contents somewhere else to be safe is never a bad idea). Then, from my command line inside my profile’s directory, I ran `rake`.
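Stripped of the file handling, the core of what that rake task does is plain ERB rendering against an environment variable. Here’s a minimal, standalone sketch of just that step (the `supersecret` value is made up; in TeamCity the `Password` variable comes from the build configuration):

```ruby
require 'erb'

ENV['Password'] = 'supersecret' # stand-in for the pipeline-provided variable

# Render the same template string that lives in attributes.yml.erb
template = ERB.new("password: <%= ENV['Password'] %>")
rendered = template.result(binding)
puts rendered # => password: supersecret
```

The rakefile does exactly this for every `.yml.erb` file it finds, writing the rendered string back out with the `.erb` suffix chopped off.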
And guess what: it created my `attributes.yml`!

## Test it out in TeamCity

I won’t give you a tutorial on TeamCity, but I did need to test it out there, so I ran my same `inspec exec` command inside a development environment build configuration in TeamCity to see if the environment variables worked there. I did have to tweak it a bit to work within that pipeline, but it was not a big deal. First, of course, I had to set up another build step to execute `rake`. After all of that we decided to wrap it in Ruby code and run the rake and profile that way, but all in all, it was just fine!

## Concluding Thoughts

As I said in my last post, I learned that it is not a waste of time to do things manually first. It saves a ton of time and gains you better insight into what you actually need to code.

Also, as far as my blog posts go, it looks like I’ll be pivoting away from the straight tutorials and moving more toward _how I did it_ types of posts. I was getting in the weeds about making them perfectly follow-able, but I got a lot of good feedback at the Chef Community Summit that it wasn’t really all that necessary. So there you go!

---

# Chef Policyfiles—The Preferred Way to Package Chef
URL: https://hedge-ops.com/posts/policyfiles/

Discover the benefits of using Policyfiles in Chef for managing dependencies and changes for nodes. Take control over change management, and make Chef easier to learn and more secure.

In the Chef ecosystem, _policyfiles_ are the preferred way to manage dependencies and changes for nodes. This post gives an overview of the feature so you can get up and running with it. This feature takes away so many problems with the traditional environment- and role-based mechanism for updating cookbooks.

## Why Policyfiles?

Early on in my Chef adoption, [it became clear that I couldn’t deliver on the strict change management controls within the legacy Chef workflow without a lot of work](/posts/my-advice-for-chef-in-large-corporations).
With the traditional Chef workflow, you can update a cookbook in production, and all of a sudden all of your nodes are running different code. Was it tested this way before? We hope so! _We hope so_ doesn’t cut it when you’re dealing with an enterprise as large and complicated as NCR. Our entire business rests on the trust our customers put in us to securely handle their financial transactions.

With Policyfiles, you can guarantee that the exact same cookbooks that ran in earlier environments will run in later environments. You get real change management that is intuitive and doesn’t leave you trying to explain the intricacies of Chef dependency management while remediating an incident. _It just works._

Another benefit we get out of Policyfiles is that they make Chef easier to learn. Rather than burdening the user with a complex structure of roles, cookbooks, environments, and pinning, I can simply show them a Policyfile and the workflow I outline below. This greatly speeds up the time I spend teaching my colleagues Chef. I challenge the Chef veterans who are reading this to explain Chef to someone using the workflow outlined below and watch the magic: you’ll see that they really get it at the end, and they went from nothing to a working solution far more quickly than you’re used to.

## Policyfile Workflow

The best way to understand the Policyfile feature is by walking through an example. We’ll configure a webserver for one of our apps with Policyfiles.

## Policyfile.rb file

The first thing we’ll start with is the `Policyfile.rb` itself. A Policyfile declares the name, run list, sources, and attributes for a node or group of nodes. Though `Policyfile.rb` is the default name for the policyfile, you can name it whatever you want. On our projects, there are usually many Policyfiles: we could have `myapp-webserver.rb` and `myapp-database.rb`. The name that you use has to be unique in your Chef server.
If you’re just starting out, the Policyfile will go in your application’s cookbook repo. As you advance, you’ll probably want to separate it into its own repository, because the frequent revisions of the lock file outlined below will clutter up your version control history. Over time, we have migrated all of our policyfiles into their own application-based repositories.

### Creating the Policyfile

It’s always good to start out with a generated policyfile to make the adoption a little easier. There are two ways to do this. First, you could generate the Policyfile directly:

```bash
chef generate policyfile Policyfile.rb
```

Or you can add the `-P` flag to the `chef generate cookbook` command:

```bash
chef generate cookbook myapp -P
```

Either way, you have a Policyfile generated and ready to go.

### Basic Contents

Once the Policyfile is generated, it should look like this:

```ruby
name 'webserver' # will be used later in Client.rb on the Node

default_source :supermarket, 'https://supermarket.mycompany.com' # this uses only internal cookbooks

run_list 'recipe[myapp::webserver]' # the run list of recipes; won't contain roles

# where to find cookbooks that are outside of the default_source
cookbook 'myapp', git: 'https://git.mycompany.com/devops/myapp'
```

The inline comments above walk through each element.

### Environment-specific settings

Pretty quickly you’ll run into situations where you have environment-specific settings. This is better avoided if at all possible; one possible solution is [to use Consul](https://youtu.be/TEvElu6Wnbc) to deal with environment-specific settings. However, [it’s also important to make progress](/posts/all-or-nothing-changes), so you’ll probably want to declare the settings in a structure that includes the `policy_group`.
```ruby
# in the Policyfile:
default['qa']         = { myapp: { database: 'qaserver01' } }
default['uat']        = { myapp: { database: 'uatdbsrv32' } }
default['production'] = { myapp: { database: 'proddbsrv62' } }
```

Then in our recipe code, we can reference the `policy_group` and easily get to our setting:

```ruby
database = node[node.policy_group]['myapp']['database']
```

Or you could take it one step further and include the [poise-hoist](https://github.com/poise/poise-hoist) cookbook in your `run_list` and simply write:

```ruby
# with poise-hoist, you can't tell you're using policyfiles
database = node['myapp']['database']
```

If you want to learn about this in more detail, check out [my follow-up post](/posts/policyfile-attributes) that dives into it more deeply.

## Creation of the Policyfile.lock.json file

Now that you have a declaration of what you want to run on a machine and your environment-specific settings declared, it’s time to create a point-in-time snapshot of the _specific_ dependencies Chef will use on a node. This is your actual policy, and it is stored in your `Policyfile.lock.json` file. This is the file that your node will read to pull dependencies down and run them locally.
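Before moving on, it may help to see how the environment-specific lookup above resolves. This is a plain-Ruby stand-in (not Chef itself), using hypothetical hashes that mirror the Policyfile defaults:

```ruby
# Plain-Ruby stand-in (not Chef) for the policy_group lookup shown above.
# The node hash mirrors the hypothetical Policyfile defaults.
node = {
  'qa'         => { 'myapp' => { 'database' => 'qaserver01' } },
  'uat'        => { 'myapp' => { 'database' => 'uatdbsrv32' } },
  'production' => { 'myapp' => { 'database' => 'proddbsrv62' } }
}

policy_group = 'uat' # in real Chef this comes from the node's client configuration

database = node[policy_group]['myapp']['database']
puts database # uatdbsrv32
```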
To generate your `Policyfile.lock.json` file, run:

```bash
rm Policyfile.lock.json # remove any old lockfiles first
chef install Policyfile.rb
```

This generates the following important attributes at the top of the file:

```json
{
  "revision_id": "6156a875a7c0eb06ce9gdc9e3d4f19809752942efd6dd20888ddd9fd8bbbd43b5",
  "name": "platform",
  "run_list": ["recipe[platform::default]"]
}
```

Farther down the file, we can see the output for one of our cookbooks:

```json
{
  "windows": {
    "version": "1.40.0",
    "identifier": "54a9b2515c853919c4953893997899584d4cefba",
    "dotted_decimal_identifier": "23830481377985849.7253019596134776.168604533059514",
    "cache_key": "windows-1.40.0-supermarket.mycompany.com",
    "origin": "https://supermarket.mycompany.com:443/api/v1/cookbooks/windows/versions/1.40.0/download",
    "source_options": {
      "artifactserver": "https://supermarket.mycompany.com:443/api/v1/cookbooks/windows/versions/1.40.0/download",
      "version": "1.40.0"
    }
  }
}
```

You can see here that there is a very specific declaration of the dependency for the cookbook. Because this is in the lockfile, if we want to regenerate all dependencies from this `Policyfile.lock.json`, we can do so as long as we still have connectivity to the repositories where the dependencies are stored.

It’s important to also note that the `identifier` here _also_ doubles as a checksum of the cookbook contents. If the contents change, but _nothing else_ changes, then `chef-client` will refuse to run the policy. This is a tamper-proofing mechanism that increases your ability to predict what code will run on your servers. Remember, we are running this code with elevated privileges, so if you’re running in production, it’s incredibly important to predict what will happen. You can’t easily predict outcomes without policyfiles.

## Pushing it to the Chef Server

Now that we have a lockfile built, it’s time to make the policy active for our nodes.
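One quick aside before pushing: the tamper-check idea behind the `identifier` above can be illustrated with a content hash. Chef’s real identifier algorithm hashes a canonical serialization of the whole cookbook; this plain-Ruby sketch, with made-up recipe text, is a simplification:

```ruby
require 'digest'

# Simplified illustration of content-addressed identifiers (Chef's real
# algorithm hashes a canonical serialization of the whole cookbook).
original = "template '/etc/motd' do\n  source 'motd.erb'\nend\n"
tampered = original.sub('motd.erb', 'evil.erb')

id_before = Digest::SHA1.hexdigest(original)
id_after  = Digest::SHA1.hexdigest(tampered)

# Any change to the content yields a different identifier, so a client
# comparing against the locked identifier can refuse to run tampered code.
puts id_before == id_after # false
```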
If our nodes, Chef Server, and development machine are all on the same network, we can simply push the policy to the Chef Server directly. If you’re doing this within CI, where the push may happen on another agent or at another time, you’ll want to run `chef install` first to ensure the cookbooks are locally cached. The `chef install` command will _not_ replace the lockfile if it already exists.

```bash
chef install Policyfile.rb # to ensure dependencies are loaded
chef push qa Policyfile.rb
```

This will push the policy and all dependencies declared in the lockfile to the Chef Server for the `qa` policy group. Once you run this command, you can guarantee that you can run it on a node. No more remembering to upload a specific dependency; it’s simply there for you to run and will include the exact same cookbooks that are in the lockfile.

The `qa` above is your policy group. A policy group, similar to an environment, is a logical group of nodes that you want to have the same policy. Since you’ll often be using the same Chef Server to manage multiple environments, you’ll want to split your nodes into different policy groups so you can make sure that you are flowing policy changes through a pipeline before they get to production.

Also, note that you should _never_ run the `chef update` command. The results of this are not easily predictable, so I’ve stayed away from it. If you need to regenerate a lockfile, remove the old one and run `chef install`. If you want to push the policy, ensure that the dependencies are loaded with the `chef install` command and then push it with `chef push`.

## Setting up Chef Client

To get your node to have the appropriate policy name and group, you need to update its attributes.
The easiest way to do this is when bootstrapping the node itself:

```bash
knife bootstrap mywebserver --policy-group qa --policy-name webserver
```

If you, like me, have a node-centric bootstrapping mechanism, your bootstrapper will need to update node attributes using the `-j` flag. First, create attributes with the `policy_name` and `policy_group` in them:

```json
{
  "policy_name": "webserver",
  "policy_group": "qa"
}
```

And then run:

```bash
chef-client -j attributes.json
```

From there your node will use that policy. I used to manually add the settings to `client.rb` directly, but now I know that this is bad because it means I have to manually update them again if I ever need to change them. Setting them in the node attributes directly allows me to change them remotely on the Chef Server.

## Packaging it for Air-Gapped environments

You’re not always going to have a connected Chef Server available and may need to transfer your policy to an air-gapped environment. Policyfiles make this process incredibly easy because they package all dependencies into one file. To do this, start by running:

```bash
chef export Policyfile.rb . -a
```

This will export all of the cookbooks listed in the `Policyfile.lock.json`, along with the lockfile itself, into a single archive. Now you can transfer this file to the air-gapped environment however you are used to doing so.

This is an essential element of the benefits of Policyfiles in a security-conscious environment: you get to keep the same controls you have in place while you begin implementing Chef! Yes, eventually you’ll do a CI/CD pipeline like [Chef Workflow](https://docs.chef.io/workflow.html), but don’t let that get in the way of getting value out of the Chef ecosystem! That’s the absolute worst thing you could do. Create value early and often. Work around your existing controls and change the parts that you have buy-in to change. Repeat that and soon enough you’ll be in a good place.
Once you’ve generated the archive and transferred the file to your air-gapped environment, it’s time to load it up on the Chef Server. You can run:

```bash
chef push-archive qa Policyfile-6156a875a7c0eb06ce9gdc9e3d4f19809752942efd6dd20888ddd9fd8bbbd43b5.tar.gz
```

Again, we’re declaring a policy group here, but this is pretty much the same as the `chef push` command above. Your policy is active for that policy group on the Chef Server, and you can rest assured that all cookbooks are there ready to be used.

## Pipeline management

We’re going to want to add this workflow to a pipeline that we can manage in CI. The process will roughly consist of:

1. Cookbook builds, which include running Test Kitchen, ChefStyle linting, etc.
2. Promotion to an internal supermarket (if you have one)
3. Updating pinned versions of those cookbooks in the Policyfile or in specific cookbooks through a pull request
4. Whenever the `Policyfile.rb` changes, or on demand, or when dependencies are updated, rebuilding the `Policyfile.lock.json` file and checking it in
5. Pushing the `Policyfile.lock.json` file to the Chef Server for locally available resources. If there is a pipeline, push to one policy group at a time and make sure each works before pushing out even further.
6. If there isn’t a Chef Server connected to your build environment, [posting the policyfile archive to be loaded](/posts/artifactory) by your air-gapped environment.

Much of this can be automated, but you’ll find that there is a step where you have to physically deal with the air-gapped environment (by definition).

## Which Policy is Active?

As I said before, the revision id that is generated as a part of your lockfile will be the single identifier for this policy from here on out.
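Since the lockfile is checked in, that revision id can also be read straight from your repo. A small sketch, using the example lockfile values from this post:

```ruby
require 'json'

# Sketch: recover the short revision id from a checked-in lockfile.
# The revision_id is the example value used throughout this post.
lock_json = <<~JSON
  {
    "revision_id": "6156a875a7c0eb06ce9gdc9e3d4f19809752942efd6dd20888ddd9fd8bbbd43b5",
    "name": "webserver"
  }
JSON

short_id = JSON.parse(lock_json)['revision_id'][0, 10]
puts short_id # 6156a875a7
```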
So to see which policy is active you can simply run:

```bash
chef show-policy webserver
```

Which will generate:

```text
webserver
========
* qa: 6156a875a7
```

Here you have the first ten characters of your revision id, and you have clarity about the exact version of the policy that is active for the `qa` group. If you’re checking in your lockfiles through a pipeline, this revision id will be stored with your lockfile in your git repo, so you can tell when it was created. You have a great understanding of the exact changes that went into your environment.

Similarly, when you run `chef-client`, you see exactly the revision id and policy that are used:

```text
PS D:\chef> chef-client
Starting Chef Client, version 12.11.18
Using policy 'webserver' at revision '6156a875a7c0eb06ce9gdc9e3d4f19809752942efd6dd20888ddd9fd8bbbd43b5'
```

So at all levels, you have repeatability and traceability of all changes.

## Conclusion

The Chef community should further adopt Policyfiles because they are easier to learn than the legacy workflow, give you better control over change management, and are more flexible for security-conscious implementations. I recommend using Policyfiles for any significant Chef implementation in any enterprise.

---

# Chef with Windows
URL: https://hedge-ops.com/posts/chef-with-windows/

Explore the challenges and benefits of using Chef in a Windows environment. Learn from our experience at NCR and discover how to make a compelling business case for change.

Recently [Peter Burkholder asked in the community](https://discourse.chef.io/t/chef-in-a-windows-monoculture-success-examples/9733/7) whether anyone was doing Chef at scale in a Windows environment and what lessons were learned along the way to make that happen.
While we at NCR are certainly _not_ the first Windows-oriented business to utilize Chef at scale, [we are doing it](https://www.youtube.com/watch?v=ZG3OZologLU&t=45s), and I have a lot of experience and ideas that could be helpful to others. Many of those ideas have been solidified as [my wife](/about/annie) has recently been working with a lot of Microsoft-oriented people, and I’ve had to explain the culture to her.

At first, adopting a non-Microsoft technology can feel daunting. No matter where you go within the Microsoft landscape, there is either a competing Microsoft-endorsed technology or one that is rumored to be on its way. So you get a lot of people who will see that something isn’t from Microsoft and just dismiss it outright.

Another hurdle to overcome is that people within this culture view open source as chaotic and expensive. Many people outside the Microsoft ecosystem view open source as a driver for innovation and thus tolerate the chaos that happens when trying to get it to work. People who have spent their careers with Microsoft technologies don’t think that way; they want it to work, be intuitive, and be documented, and they want to get someone on the phone if something goes wrong.

For these reasons, in an organization that has heavily invested in Microsoft, I would _not_ start with the awesomeness of the technology. This will get you nowhere. Instead, it all boils down to the business case for the proposed change. Do we need to do configuration management with Chef, or do we need to use System Center or some other related technology? It’s a great question and probably one that should be considered through deep investigation. Find the business case by demonstrating that the current process isn’t working, either by keeping costs high (usually labor) or delaying business opportunities (usually new development). The more you can get on the right side of that business case, the better time everyone will have.
So if the business is using GPOs for managing configuration state in Active Directory, then do a compliance scan against your nodes and see how they line up with the CIS benchmark for Windows Server 2012. Oh, wait…it’s total chaos. Why? Hint: people are using remote desktop to make your system unmanageable by making one-off changes…everywhere. Another hint: this is absolute insanity.

Keep digging. Can you get a machine up and running quickly? Why not? Would Chef help with that? If you need to configure a third-party tool like monitoring or logging, can you do that effectively? Sure, it’s great when all you do is Microsoft and it all fits together nicely, but is that realistic? What happens to your operations costs when we take away the UI from the Microsoft stack (or even Windows Server 2016)? They will go way down, but you’re not going to get there without automation.

Do you want to go to Azure? Do you realize that going to Azure without an automation plan is like buying a tank, driving around a city (your business), and pushing random buttons? It’s going to cause damage if you don’t make a radical change toward automation. In other words, the problems you have been facing related to scale do not have anything to do with the fact that you had to call Dell before to get hardware racked. It’s everything after that too! So will System Center help you there?

The answer to all these questions, as with many technologies, is…maybe Microsoft is the best way, but usually not. That’s another quite irritating aspect of Microsoft stuff. It can do everything. It solves everyone’s problems. So when you’re in this environment, look at the results! Don’t let the Microsoft salesperson or the single excited Microsoft-solves-all-problems person get you sucked into ignoring common sense for your business. If the tools you are using don’t drive you to the outcomes you want, then consider changing the tools and the culture behind those tools (the people).
The real question is: what level of support do you need to get these things done? I think Microsoft is a fantastic platform for enterprise-level development, and they have an excellent cloud solution for enterprises. But they also have a long legacy and an entire culture centered around the message that you can do IT with little training and a few button clicks. By the way, [this is the exact culture that Jeffrey Snover has fought for years and years](https://www.youtube.com/watch?v=3Uvq38XOark). Snover has done great things, but it’s important to remember that the culture he fought still exists, is going strong, and, even worse, is feeling threatened right now.

So as a business, who do you want to align with? Sure, you have a strong and great history with Microsoft and an entire staff that knows all about it. But you also need a partnership with another company to get you to where you want to be in the above opportunities. Chef is an excellent choice in this regard. You have a whole group of people at Chef Inc. who really get Microsoft (like [Matt Wrock](http://www.hurryupandwait.io/), [Steve Murawski](http://stevenmurawski.com/), [Stuart Preston](http://stuartpreston.net/) (a partner), [Jessica DeVita](http://www.theubergeekgirl.com/), and [Trevor Hess](https://twitter.com/trevorghess?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) (a partner), to name a few). This core brings Windows into the Chef ecosystem as a first-class citizen. They advocate for DSC and align themselves with PowerShell/Snover. It’s a fantastic Windows configuration management platform.

Also, for a large 100+ node organization, the other selling point is that having a relationship with Chef gets you access to those best practices and people to accelerate the transformation. The consulting I’ve gotten from Chef regarding my approach is probably more valuable than the software itself, because it has been absolutely critical to getting us to the point where we can take advantage of the software.
Now that we’ve covered the most important thing, the outcomes, let’s talk a little about technology: what about Linux? If you focus on the business outcomes and create an early-adopter groundswell of support, then the Linux question should solve itself. If it doesn’t, [someone is being an asshole](/posts/the-technical-asshole-curse). If that’s true, take them to lunch and understand their needs, then incorporate that into your overall strategy. If they still don’t listen, then by this point they’re clearly being an asshole, so make that reality visible to leadership and work toward getting around that person.

The fact of the matter is that a company whose leadership is incapable of taking advantage of fantastic strategic and ROI business opportunities because of a few people who can’t handle learning another OS is not one with a bright future. Someone at some level should be able to see this. If they fail to see it after all that, then you are indeed on a sinking ship. That would be quite depressing if there weren’t so many non-sinking ships all around you that will embrace and love what you’re doing. In fact, [you should come work with me at NCR](https://www.ncr.com/careers). :)

---

# Migrating from Chef Analytics to Chef Visibility
URL: https://hedge-ops.com/posts/analytics-to-visibility/

Discover how to smoothly transition from Chef Analytics to Chef Visibility. Migrate your existing Chef Analytics Server to a Chef Visibility Server to ensure a successful transition.

The other day I was at lunch with my customer architect at Chef. We were talking about our situation. Our reference architecture fits Chef’s reference architecture from about a year ago, which consists of a Chef Server and an Analytics Server as the core solution. Change takes a while, so I had been delaying setting up a Chef Automate environment until I got some other things accomplished.
However, at lunch we both were convinced that a [Chef Visibility node](https://docs.chef.io/visibility.html) could easily replace a Chef Analytics node and set our teams up better for where Chef is going. We would set the server up in POC mode, which would mean we had no real backup plan, but that’s OK because we are not really using Chef Analytics in a useful way. This strategy is pretty typical of one I’ve used lately: let people see what you’re talking about and interact with it, even if it’s not in the _perfect final state_. Then, with experience as a guide, take the next step the _right_ way. We will likely end up with a central Visibility server, but I don’t want to risk everything just to make that happen. So we set this up instead and move things forward.

After the discussion with my customer architect, [Thomas Cate](https://www.linkedin.com/in/thomas-cate-9b63a28) helped me come up with this guide. I wrote everything down and thought it might be good to share with the community, in case anyone else was thinking of going this way.

## Migration Steps

This will migrate an existing Chef Analytics Server to a Chef Visibility Server in POC mode.

### Back up your keys on the analytics server

We’re getting ready to uninstall Analytics, but we want to keep the keys because we’ll use them later.

- Find the key location in `/etc/opscode-analytics/opscode-analytics.rb`. The keys will likely be located in `/etc/pki/tls/certs` and `/etc/pki/tls/private`
- Back up the keys to `~/cert-backup`

### Clean and uninstall Chef Analytics

Let’s get rid of Analytics now:

- Run `opscode-analytics-ctl cleanse`
- Run `opscode-analytics-ctl stop`
- Remove the package: `yum remove opscode-analytics`
- Run `sudo rm -rf /opt/opscode-analytics`
- Reboot the server: `sudo reboot now`

### Remove Analytics from the Chef Server

On the Chef Server:

- Open `/etc/opscode/chef-server.rb` and comment out `ocid` for analytics
- Run `chef-server-ctl reconfigure`

### Install Automate

- Navigate to the [downloads page](https://downloads.chef.io/automate/) and copy the appropriate link
- On the Analytics Server, `wget` that link to download it to the user home, for example: `wget https://packages.chef.io/stable/el/7/delivery-0.5.370-1.el7.x86_64.rpm` for my CentOS box
- Install the package, on CentOS: `sudo rpm -Uvh delivery-0.5.370-1.el7.x86_64.rpm`. When it asks you to configure the Delivery Server, say no

### Set up Licensing

You’ll need to get a `delivery.license` from a friend at Chef. You also get your pem key for your Chef Server as a one-time use to authenticate to it and add workflow stuff. I’m not using that, so this was not really needed, but you know, we do it anyway.

Run `sudo delivery-ctl setup --license ~/delivery.license --key ~/[your-name].pem --server-url https://[your-chef-server]/organizations/[your-org] --fqdn analytics.[your-domain].com`

- When it asks for an organization, use your company name. The organization can be the same for different visibility servers. In fact, one can argue that in this situation it’s quite useless; it’s more for multi-tenancy.
- You should probably copy the `server-url` from your `knife.rb` file.
### Finalize installation

- Run `sudo delivery-ctl reconfigure`
- Check that the settings in `/etc/delivery/delivery.rb` are good (especially the FQDN)

### Set up certificates

Now it’s time to reuse the certificates that you backed up earlier.

- Put the ssl certs in place in `/var/opt/delivery/nginx/ca`
  - _Important:_ these files need to have the same names as what is there by default. So you should copy them over and rename them to match what was there before.
- Restart nginx: `sudo delivery-ctl restart nginx`
- Make sure it’s working with `delivery-ctl status`

### Set up an Admin User

- Run `delivery-ctl create-enterprise [enterprise] --ssh-pub-key-file=/etc/delivery/builder_key.pub`

This will generate user information that will be output to your command line:

```text
Created enterprise: company-name
Admin username: admin
Admin password: password-here
Builder Password: builder-password-here
Web login: https://analytics.mycompany.com/e/companyname/
```

Navigate to that web login and you should see the Automate login. Log in with `admin` and the password it gives you.

### Set up more users

At this point you can change the admin password to something you can remember and add users. This has to be separate from the Chef Server because there can be many Chef Servers to one Visibility Server. Well, not in this case, but that’s the design. So you have two options:

1. Set up a bunch of users manually
2. Set up LDAP authentication following [these directions](https://docs.chef.io/integrate_delivery_ldap.html)

### Configure Data Collector on the Chef Visibility server

Now I’m going to set up a token for configuring a data collector for my clients to talk to the Chef Visibility server:

- Create a GUID (on PowerShell I ran `[guid]::NewGuid()`)
- Add the data collector configuration to `/etc/delivery/delivery.rb`: `data_collector['token'] = 'my-guid-here'`
- Run `sudo delivery-ctl reconfigure`

### Configure Chef Server to report to Visibility

You’ll also need [the Chef Server to report to Visibility](https://docs.chef.io/setup_visibility_chef_automate.html#configure-chef-server-to-send-server-object-data). To do this, add the following settings to `/etc/opscode/chef-server.rb`:

```ruby
data_collector['root_url'] = 'https://my-automate-server.mycompany.com/data-collector/v0/'
data_collector['token'] = 'TOKEN'
```

### Configure Chef Client to report to Visibility

You’ll need to make sure your `chef-client` is on a fairly recent version. For me, I needed to update `chef-client` in my infrastructure to 12.15.19 for this to work properly; I had some problems on an earlier version. Now that we’re on the latest, let’s configure the `data_collector` on the node in the `client.rb`:

```ruby
data_collector.server_url "https://analytics.yourcompany.ncr/data-collector/v0/"
data_collector.token "guid-from-previous-step"
```

### Configure a notification replacement for Analytics

We were taking advantage of the notifications in Chef Analytics and needed a replacement. My colleague has written [a slack notifier report handler](https://github.com/jkerry/SimpleSlackHandler) for Chef that we’re now using.

## Initial Thoughts on Visibility

First of all, I’m really impressed with the Chef team for thinking outside the box and getting me on their new platform in a way that works for me.
This underscores once again why they’re a great partner: Chef doesn’t show up with an inflexible agenda; they listen, find the _right_ solution, and then execute.

My first impressions of the product are that it looks very nice and clean. I can tell Chef has hired some UX people. :) Visibility definitely has room to grow, but it’s an exciting start to a great platform for making Chef operable at scale within the enterprise. While some in the open source community might elect to roll their own reporting platform for Chef-related stuff, to us that’s the path of pain and discomfort, because we realize that we’ll never have the funding that Chef has to deliver this. And they like hearing feedback from people like me. So to me that’s the best of both worlds.

I don’t like how the Visibility product is tied to workflow, though. I can understand how Chef wants to sell an all-in-one solution, but to me, it’s better to design your products with as little coupling as possible, so they can be independent. From their perspective, it may make sense: keep things simple so your support can be focused in one direction.

I’m really glad I converted my Chef Analytics server to a Visibility server. Hopefully this helps you do the same.

---

# Margin for Leadership
URL: https://hedge-ops.com/posts/margin-for-leadership/

Explore the importance of creating margin in your day for leadership and mentoring. Learn from past mistakes and discover how to effectively manage your time for the benefit of your team.

There was a phase in early 2014 when I had a _lot_ of new initiatives going. In retrospect, I was smack dab in the middle of a horrible career miscalculation. I had gotten excited about DevOps, told my management about it, and found myself on a project to get all of our development teams _operating_ on the same development tools. Needless to say, I was in a lot of meetings with people who were looking at me angrily as a complete waste of their time.
It was during this time that I really dropped the ball on mentoring. I would be in meetings all day, and the poor members of my team would need my help in growing and learning, and I would be absent. They would try to catch me as I walked very briefly to my desk before going to the next meeting. It was terrible. I learned a valuable lesson during that season. If you're in meetings all day and don't have time for people around you, then you and others around you are going to stagnate. That may be fine for people at a certain level, but for most of us that is a very bad thing. Since then, I've learned to create margin in my day, so I can check in with people. I keep my calendar blocked during a portion of the day and I [focus on planning](/posts/planning-vs-execution) and also on _talking to people_. I reach out to them, make sure that we have an [open line of communication](/posts/exposing-the-unknown) and that they have a [solid plan themselves](/posts/internalizing-the-plan). If you're wanting to mentor people, put room in your day for it. You _have_ to create margin in your position for leadership. Your work isn't all about what you _do_ yourself today; it's more about doing the smart things and enabling those around you to do the same.

---

# InSpec Basics: Day 9 - Attributes

URL: https://hedge-ops.com/posts/inspec-basics-9/

Discover the basics of InSpec Attributes in this Day 9 tutorial. Learn how to create an InSpec profile, declare attributes, use them in if statements, and run different tests. Ideal for database and webserver roles.

Y'all, I was in [InSpec](http://inspec.io/) heaven a couple of weeks ago. I was on a [project](https://www.10thmagnitude.com/) where I was supposed to create an InSpec profile that tests the build and application configuration of a set of servers within a pipeline in [TeamCity](https://www.jetbrains.com/teamcity/): smoke tests.
I had to translate a bunch of ServerSpec into InSpec and run the InSpec profile independently of the cookbook. Seems easy enough, but the challenge is testing all the different environments and using different tests for each node spun up. The client also wanted it to be in one step for all the nodes, not a different step for each one. But first, if you've missed out on any of my tutorials, you can find them here:

- Day 1: [Hello World](/posts/inspec-basics-1)
- Day 2: [Command Resource](/posts/inspec-basics-2)
- Day 3: [File Resource](/posts/inspec-basics-3)
- Day 4: [Custom Matchers](/posts/inspec-basics-4)
- Day 5: [Creating a Profile](/posts/inspec-basics-5)
- Day 6: [Ways to Run It and Places to Store It](/posts/inspec-basics-6)
- Day 7: [How to Inherit a Profile from Chef Compliance Server](/posts/inspec-basics-7)
- Day 8: [Regular Expressions](/posts/inspec-basics-8)

If you'd like to follow along, then you're welcome to go clone this [practice InSpec profile](https://github.com/anniehedgpeth/practice-inspec-profile).

## Here's what we'll cover

1. [Assessing our needs](/posts/inspec-basics-9#assessing-our-needs)
2. [Declaring the Attributes](/posts/inspec-basics-9#declaring-the-attributes)
3. [Use the attributes in an if statement](/posts/inspec-basics-9#use-the-attributes-in-an-if-statement)
4. [Create different attributes yamls to run the different tests](/posts/inspec-basics-9#create-different-attributes-yamls-to-run-the-different-tests)
5. [Concluding Thoughts](/posts/inspec-basics-9#concluding-thoughts)

## Assessing our needs

So let's say that you have two different roles that you want to test: database and webserver. And let's keep it simple and just test one environment where you spin up one machine for each role.
That means that we're going to need two different sets of tests:

- One for client tests
- One for server tests

If you're following along in the [practice InSpec profile](https://github.com/anniehedgpeth/practice-inspec-profile), then you'll see that there are three different sets of tests (well, really just one test in each control, but you get the picture). We're going to set it up so that we can run one test for each role. Now, the big bummer of this is that attributes don't work for InSpec in Test Kitchen just yet like they do for [recipes](https://docs.chef.io/config_yml_kitchen.html), but I think that would be great if they did! (hint hint) Maybe sometime soon we'll get that.

## Declaring the Attributes

Let's go over to our control and add the attributes hard-coded with a default value and see what it does. We're going to declare the attributes above where we're using them. So add this above your `client` control.

```ruby
role = attribute('role', default: 'base', description: 'type of node that the InSpec profile is testing')
```

## Use the attributes in an _if_ statement

Now that the attributes are declared, we'll need to wrap our controls in an `if` statement so that it only tests that block when we want it to. Your `client` control block is going to end up looking like this:

```ruby
if ['client', 'base'].include? role
  control "Testing only client" do
    title "Tests for client"
    desc "The following tests within this control will be used for client nodes."
    describe user('client') do
      it { should exist }
    end
  end
end
```

You're saying, _If the node you're looking at is a client or base, then run this control block_. You'll do the same for the `server` control:

```ruby
if ['server', 'base'].include? role
  control "Testing only server" do
    title "Tests for Server"
    desc "The following tests within this control will be used for server nodes."
    describe user('server') do
      it { should exist }
    end
  end
end
```

What happens when you run it now?
Well, nothing different yet, so let's make that happen.

## Create different attributes yamls to run the different tests

We'll need to add a few attributes files to our profile to call on to change those roles. These are going to be yaml files, and while you may put them anywhere you want, I think it's nice if they get their own directory inside the profile. Go ahead and create these now.

![Attributes Location](/article_images/2016-10-05-inspec-basics-9/attributes-1.png)

In each yaml, put the respective attribute values.

```yaml
# In attributes.yml
role: base
```

```yaml
# In client-attributes.yml
role: client
```

```yaml
# In server-attributes.yml
role: server
```

We're going to run these tests on our local machine, and while we know, obviously, that these tests will fail, we're going to see how the attributes ran the different tests. So then, let's watch it run just the tests for the base and client roles by running:

```shell
inspec exec . --attrs attributes/client-attributes.yml
```

![Test run results](/article_images/2016-10-05-inspec-basics-9/attributes-5.png)

See how it didn't include the server-only tests? And now let's watch it run the tests for just the base and server roles:

```shell
inspec exec . --attrs attributes/server-attributes.yml
```

![Test Run Results](/article_images/2016-10-05-inspec-basics-9/attributes-6.png)

See how it didn't include the client-only tests? And there you go! That's a simple guide to attributes!

## Concluding Thoughts

I love this feature. It gives a lot of flexibility and control, and you can use it in a lot of different ways. The trick is to hard-code the attributes first to make sure it's working. So just a little job update—I'm loving it over here at 10th Magnitude. I'm learning so much. Sure, I ask some dumb questions from time to time, and I feel really dumb about them later, but I am in the perfect position to learn a ton. _Feeling grateful._
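By the way, the gating logic in those controls is ordinary Ruby, so you can sanity-check it outside of InSpec entirely. Here's a minimal standalone sketch of the same pattern; in the real profile `role` comes from `attribute('role', ...)` and the `--attrs` yaml, while here it's just a method argument for illustration:

```ruby
# Standalone sketch of the role-gating pattern used in the controls above.
# In the real profile, `role` comes from attribute('role', ...) and the
# attributes yaml passed via --attrs; here it's hard-coded for illustration.
def groups_for(role)
  groups = []
  groups << 'client tests' if ['client', 'base'].include? role
  groups << 'server tests' if ['server', 'base'].include? role
  groups
end

puts groups_for('client').inspect # the client role skips the server-only tests
puts groups_for('base').inspect   # the base role runs both groups
```

Because `'base'` appears in both `include?` checks, running with `attributes.yml` (role: base) exercises every control, which is exactly what we saw in the runs above.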
Go to Day 10: [Attributes with Environment Variables](/posts/inspec-basics-10)

---

# The Goal

URL: https://hedge-ops.com/posts/the-goal/

Explore the journey of a tech newbie transitioning into a new software development style. Learn how clear goals and growth plans can transform careers and lives.

A while back I was working with someone who wasn't new to technology, but was new to [the style of software development](/posts/categories/process/) that my team was operating within. We worked on [getting honest about where he was at](/posts/exposing-the-unknown) over the first few weeks of working together. I could tell this was uncomfortable for him, and, frankly, since then I've grown in my empathy for the fear and difficulty of being new to a technology or process. I remember clearly telling my colleague in a 1:1 that he wasn't where he needed to be in order to be successful. He knew it. I knew it. That's usually a bad place. That's usually the place where people give up on each other. That's the place where both parties start making plans to get themselves out of the uncomfortable situation. Instead of doing that, though, we talked about his goals. What did he want to get out of this job? Out of his life? What was he missing? Then we aligned his goals with mine by creating a growth plan that identified some training opportunities, coding challenges, and regular communication. A few years later, my colleague is doing well. He made the transition. I see so many managers take the easy way out because they can't see what's possible in people. If you had a chance to change someone's life by helping them gain valuable skills and insights that will serve them the rest of their lives, why wouldn't you take it? Watching lives change through clear goals has been one of the most rewarding aspects of my career.

---

# Planning vs. Execution

URL: https://hedge-ops.com/posts/planning-vs-execution/

Explore the importance of planning before execution in achieving success. Learn from real-life experiences and understand how to balance excitement with strategic planning in your endeavors.

When you're new at something, you're excited and want to prove yourself, and you want to get to that _proving_ of yourself as quickly as possible. There is a lot of tension in your life through feeling _new_ at that thing, and that tension is uncomfortable. The other day [Annie](/about/annie) and I were talking about something she was working on. We were breaking the problem down into [steps that she understood](/posts/exposing-the-unknown), and she was beginning to [internalize the plan](/posts/internalizing-the-plan). She was eager to put the plan into action and got excited about the newfound insights she had gained from our discussion. Then she made a tactical mistake: she got so excited about the plan that she started working on it. That sounds great, and it is great in principle, but the problem with her approach of _just getting started with it_ was that we really weren't done planning out the whole of what she was trying to accomplish. This mistake is similar to what I've seen over and over with newer people in technology. Early on in my career, I worked with a friend of mine who always wrote everything down. I would kind of make fun of him…but then realized that his organization and plan meant that he executed at a level that I didn't. So I started writing things down. I started making a plan. And I started really focusing on the _planning phase_ of my work. I was also able to separate the _planning phase_ from the _execution phase_. When you're executing a great plan, it's like the wind is at your back and everything is easy. When you're working without a good plan, especially when you're new to something, you get onto an emotional roller coaster of unclear expectations.
The key to growth is making _finding the plan_ just as important as _executing the plan_.

---

# Internalizing the Plan

URL: https://hedge-ops.com/posts/internalizing-the-plan/

Explore the importance of creating and internalizing a plan in a technical context. Learn how honesty and understanding are key to growth and efficiency in technology.

As I wrote about in [my last post](/posts/exposing-the-unknown), it's extremely important when working with someone new to technology to create an honest and open relationship with them where they feel free and even supported to tell you that they don't know something. I really love it when I try my best to explain something and the person I'm working with is free enough to tell me that they don't have a clue what I'm talking about. A part of me comes alive whenever that happens because I know that honesty is the basis for growth. It's easy in a technical context to focus on the technical journey that someone needs to make in order to be effective. Do they know proper source control, how to interact with a great coding editor in an efficient way, and how to write tests? These are all important topics and ones that [Annie](/about/annie) and I covered extensively in the early days of her journey into technology. That's the easy part. Here's the hard part: are they able to create a plan and internalize it? I think that's the true test to make sure we're on the right path. It's always horrible when I go over a plan with someone, and they look at me enthusiastically, nodding, then at the end they can't articulate the plan to get there. If that happens, then [we have to work a bit more on honesty](/posts/exposing-the-unknown). What I work on early in any mentoring relationship is having an honest conversation back and forth about where we are and where we are going, and breaking things down _at the level that the person understands_.
Sometimes this means we create a list of things to do _in the next few hours_ that covers exact files to change and relate them to a diagram on the whiteboard we just drew. That’s fine with me as long as (1) they know what they need to do, for real, and aren’t faking it, and (2) they can internalize it. If you can’t internalize a plan enough to tell someone about it, you don’t know what you’re doing. No amount of googling is going to change that. You _have_ to plan ahead, and at a level that _you_ understand. Never let a mentor plan _above_ your understanding, and, if you’re mentoring someone, seek to find out where they are and ensure there is a solid plan _at that level_. --- # Leaning In URL: https://hedge-ops.com/posts/leaning-in/ Explore how Sheryl Sandberg’s ‘Lean In’ inspired a career change into technology. Discover the journey from studying for the GMAT to becoming a Cloud Automation Engineer. I knew when I finished listening to (thank you, [Audible](http://www.audible.com/)) Sheryl Sandberg’s heartening exhortations in [Lean In](http://leanin.org/book/) that my life was about to change. I was thinking that I should go back to grad school for the changes that I wanted in my life to materialize, and so I studied for six weeks and took the GMAT. After all of those long hours of studying, however, I realized that that wasn’t the life that I wanted for the next two years. I didn’t want to have to wait in the wings for two more years while I could be starting a career now! And so I got frustrated and antsy. I told you a little about how I got into technology—not a very [common story](/posts/introduction). Basically, I wanted a career change (I got my start in Film—capitalized because of the self-importance) but was racking my brain to figure out what I could both make a great contribution to and be fulfilled in. [My husband](http://hedge-ops.com) finally convinced me to try technology. 
He gave me an interesting problem to solve, taught me the necessary skills to solve it, and I found that I had no choice but to concede; I was hooked. I began my journey / experimentation in learning [InSpec](/inspec). It's a framework written to test for security and compliance, and it was the perfect introduction for me. [InSpec](https://www.chef.io/inspec/) was written for security people who may not have a development background, and it allows you to begin simply at a basic level by writing controls but then later discover that it's quite flexible in the [ways in which you can use it](/posts/inspec-basics-6). So I continued diving deeper into InSpec and learning [Chef](https://www.chef.io) along the way. As a part of my preparations, I wanted to immerse myself in the culture, too, so that I could learn as much as I could about the industry—where the technology is moving, what problems are waiting to be solved, and to see where I can solve problems and begin to contribute. I decided to blindly volunteer to be on the organizing team for [DevOpsDays DFW](https://www.devopsdays.org/events/2016-dallas/welcome/) (thanks again, [Doug Ireton](/posts/devops-days-dallas)), a decision that at the time I considered crazy, but today I'm so grateful for it. I was later quite fortuitously invited to [ChefConf](/posts/chefconf) by [Nathen Harvey](https://blog.chef.io/author/nharvey/) through a diversity scholarship in July and fell more in love with this dynamic DevOps culture. While there, I was graciously asked to be on the super cool [Arrested DevOps](https://www.arresteddevops.com/chefconf-2016/) podcast with [Matt Stratton](https://twitter.com/mattstratton) and [Trevor Hess](https://twitter.com/trevorghess). I had such fun talking with them and being on a panel with such great thinkers as [Jon Cowie](https://twitter.com/jonlives) of Etsy and [Fletcher Nichol](https://twitter.com/fnichol) of Chef. The conversation in my head went something like:

> How in the hell did I get here? I don't even have a job.
>
> Oh wait, I've been working my ass off to get one.
>
> Right…that's prolly why. Eff you, imposter syndrome.

While I was there I had a conversation with [Barry Crist](https://blog.chef.io/2016/02/09/devops-mainstream-2016/), Chef's CEO, and I told him about my journey and something that Sheryl Sandberg said in her book. She said that a woman will likely only apply for a job if she's 100% qualified for the position, but a man will likely apply if he's only 60% qualified because he knows he can just learn the other 40% soon after. That was mind-blowing to me. I told Barry that I was simply going to start living by the 60% rule, and he gave me his enthusiastic blessing—topped off with a double high five. So when I got home I got a call from [10th Magnitude](https://www.10thmagnitude.com/), one of the most super cool cloud consultancies in the country and where the delightful aforementioned Trevor Hess works, saying that they wanted to talk. I was elated. Knowing that I was probably only 60% qualified, I figured, 'What the heck, let's talk.' (After all, Barry did give me a double high five blessing.) So we talked, then we talked some more, then we talked in Dallas, then we talked in Chicago, then one thing led to another, and now I'm working at 10th Magnitude as a Cloud Automation Engineer! With an art degree! And here's the thing: never once did I shy away from my art background, my newness to the industry, my time as a stay-at-home mom, anything. As you can see on my blog, I'm an open book. The diversity of my background is an asset. I am who I am because of those things, not in spite of them.
And the beauty of the whole thing is that [10th Magnitude's leadership](https://www.10thmagnitude.com/leadership/) (shout-out [Alex Brown](https://www.10thmagnitude.com/leadership/alex-brown/) and [Jacob Saunders](https://www.10thmagnitude.com/leadership/jacob-saunders/)) appreciates and embraces that, and that is one reason that they are one of the leading cloud consultancies in the country. They hire for the mind, not the résumé. I'm happy to be called a 10th Magnitude Cloud Automation Engineer and still can't believe this whole crazy story. Thank you, 10th Magnitude, for taking a chance on me. I can promise you that it will pay off.

---

# Exposing the Unknown

URL: https://hedge-ops.com/posts/exposing-the-unknown/

Explore the importance of honesty and open communication in tech teams, especially with new members. Learn effective strategies for building trust and understanding.

When someone starts on my team, regardless of their experience level but _especially_ if they're new to technology, my first goal is to ensure that we are honest in our communication with each other. This is easier said than done. From their perspective, you just gave them this job, and they now have to prove things to you. If they don't understand what you're saying but think they might be able to google it later, what's the benefit in getting you to explain it to them? It's much safer to just try to figure it out and to bend the truth a little bit on whether they are understanding what you're saying. In addition, if you're the senior person who has a handle on this project, to the new person you _feel_ like a magical wizard who is casting spells. Especially if they're new, they haven't wrapped their minds around what you work on every day, and so the effortlessness with which you solve problems is, to them, _extremely intimidating_. We humans are hardwired to _not_ be vulnerable in extremely frightening and scary situations. We're hardwired to hide and let the danger pass.
So it’s no surprise that lack of honesty is a major initial issue to deal with when working with people who are just getting started. Here’s how I solve it: whenever I start working with someone who is very junior, the first thing I do is find something that I _know_ they don’t know. Then we talk through it and I do the best job I know how of explaining that thing, all the while knowing that it’s probably not sufficient for them to get it because I can’t read their mind. Then I ask them if they get it. They will almost always say, “Yeah that makes sense.” This is when I know we have a great opportunity to establish a solid working relationship, because I know that they don’t get it, but they are telling me they do. So then I say, “Great, then please explain it to me.” And usually they can’t. At this point, I communicate to them that I don’t expect them to know what I’m talking about, even after I explain it to them. Instead, I expect two things from them: (1) that they honestly tell me when they don’t know and (2) that they ask questions that will help them understand. So our relationship of learning is based on trust now: I trust that they’ll be honest and engage with me, and they trust me that I will accept them for not knowing, or even forgetting something I’ve told them. I’ve had so much success with helping people grow when I’ve established this level of trust early on. Instead of months of frustration while they go through the wilderness of confusion and dealing with impostor syndrome, we can get to the bottom of what we need to do and make a plan for getting better. It’s so rewarding to see people light up when you say to them, “I know that you’re not where you want to be; here’s a timeline of how to get you there.” We’re working on real problems now and not using fear to hide! 
So I encourage you, if you’re working with someone junior to you on your team, have some compassion, understand the fear that is involved, and work hard to create an honest and supportive relationship with them. It will pay dividends for months and years to come. --- # Summer of Discovery URL: https://hedge-ops.com/posts/summer-of-discovery/ Explore the journey of a woman re-entering the workforce and discovering her fit in the technology industry. Follow along as she navigates the challenges and rewards of this exciting career shift. I’ve been hiding out a bit this summer, but hopefully you trust me enough to know that I’ve been up to a lot. Earlier this year, [my wife](/about/annie) decided it was time to go back to work after a decade out of the full time workforce. She considered a lot of different options, but after a few initial experiments we quickly realized that she would be a great fit in technology. I’ve been telling her almost since I met her how great technology is and how much we need people like her in it. She would always tell me the reasons why she didn’t want to do it, like that she wanted to be more creative or that it sounded boring. It always frustrated me because I knew from my own experience that working in technology is one of the most creative and interesting jobs I know of. So Annie [decided to give it a shot](/posts/introduction) after a great dinner with some colleagues of mine at [NCR](http://www.ncr.com) and some good friends at [Chef](/posts/categories/chef). From there, I devoted almost all of my free time to helping her map out the path to take from her art/film background to a full-time technology job. That path included taking advantage of her strengths: [blogging](/about/annie), networking, [connecting with people](https://www.youtube.com/watch?v=U7i4JE4Zk7w), and [getting obsessed about solving problems](/posts/elasticsearch-network-hosts). It also included a lot of growing pains. Some of our experiments failed. 
In June, we were convinced that it would be great to learn Ruby enough to write custom InSpec resources, but that proved to be too difficult to accomplish at the time. We had to keep thinking about what was working, what wasn’t, and adjusting. I have found the process to be incredibly fascinating and rewarding. For a long time, I have been frustrated by the lack of diversity in technology. How are we going to get past this? Are we going to wait twenty to thirty years for higher education to figure it out? I believe there is a huge transfer of wealth and power to those who can harness technology. Are we going to allow that transfer to unequally go to white men who for their whole lives were _always_ told that they naturally fit into technology? Or are we going to break through the misconceptions and outright falsehoods that permeate our industry and help people take advantage of this fantastically liberating industry? Annie is on her way to a great career; I can already see it. Stay tuned [on her blog](/about/annie) with what her next gig will be…you’ll hear about it soon. Going forward, I want to write a few posts about some things I’ve learned over the years and with Annie on how to work with people who are new to technology. I’ve realized through this experience that it isn’t intuitive for people. If we’re really going to have positive change in including everyone in this fantastic shift in our economy, we are going to have to get _way better_ at helping new people quickly become productive and valuable. --- # Elasticsearch Network Hosts URL: https://hedge-ops.com/posts/elasticsearch-network-hosts/ Explore the process of creating a cookbook to spin up three nodes using Test Kitchen and installing Elasticsearch onto said nodes. Learn how to navigate the complexities of using ohai values, node hashes, and IP addresses in an Elasticsearch cluster. Hello, friends! I’ve missed you. I’ve been a busy bee. 
I got hired onto a contract-to-hire position at a consultancy for whom I'm working on a Chef project. I'm having a great time because I'm learning so much. To say that I'm drinking from a fire-hose is an understatement. But I definitely want to slow down and share some breakthroughs so that I can remember them for later and hopefully help some of you out along the way. So I was tasked with creating a cookbook to spin up three nodes using Test Kitchen and to install Elasticsearch onto said nodes. Easy enough? -_- Okay, so at first I was just hard-coding the `network_host` in my config because I just wanted to get it to work and I didn't really know how to get it from [ohai](https://docs.chef.io/ohai.html). Even getting up to speed on how attributes work took me a while, so the complexity of using a complicated ohai value alongside attributes with node hashes, and how it all affects my kitchen.yml, proved challenging for me. Let's just say there was more than one whiteboard session with my [favorite tutor](http://hedge-ops.com). But I really needed to get it from ohai so that the setup of the cookbook would be simpler. The thing that made it complicated to me was that there were so many IP addresses floating around with my multiple virtual machines in the Elasticsearch cluster, and I had a hard time wrapping my mind around which was what. I had three nodes, one of which was a master/host, and I didn't know which IP address in ohai was going to be the one I needed to use for `network_host`. Finding the proper IP address, however, ended up being simpler than I thought it would be. All I did was SSH into my master node in Kitchen:

`kitchen login master`

Then, to make it simple to search my ohai data, I needed to save the output to a file. (Grepping it did me no good because I needed the larger context of its location.)
So I ran:

`ohai >> ohai.txt`

Then I opened it in Nano so that I could search for my known IP address:

`nano ohai.txt`

After it was open, I did a search using `Ctrl+W` (Where Is). I knew my hard-coded IP address, so I searched for that. When I found it, I was stumped for a minute.

![ElasticSearch network hosts example](/article_images/2016-09-05-elasticsearch-network-hosts/ohai-ip.png)

But then I realized that with this information, I could map out the structure in which the necessary IP address was. If I knew that structure, then I could code against that structure to map to my IP address, right? Right. So how to do it? Well, I don't know how you would have done it, but here's what [Michael](/about/michael) and I worked out on Labor Day. In a resource that was serving as a default yml for each of my nodes, I had the following code (only showing you the pertinent info):

```ruby
interfaces = node['network']['interfaces']
interface_key = interfaces.keys.last
addresses = interfaces[interface_key]['addresses']

network_host = nil
addresses.each do |key, value|
  if value['family'] == 'inet'
    network_host = key
  end
end

elasticsearch_configure 'elasticsearch' do
  configuration(
    'network.host' => network_host
  )
end
```

So if we take it chunk by chunk, you can see what we did here.

![ElasticSearch Network Host](/article_images/2016-09-05-elasticsearch-network-hosts/ohai-network.png)

When we scrolled up in our `ohai.txt`, we could see that at the top of the tree was the `network` and then the `interfaces` branches. So we needed to start there and climb down. `interfaces` had three different keys: `lo`, `eth0`, and `eth1` - in that order. And our IP address was in the last key for that branch, so you see what we did there.

```ruby
interfaces = node['network']['interfaces']
interface_key = interfaces.keys.last
```

So then I wanted to say that my `network_host` IP address was in the same branch of the tree, or key, that had the `family` key equal to `inet`.
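As an aside, you can sanity-check this whole climb in plain Ruby without logging into a node at all, using a stubbed stand-in for the ohai data. The interface names and addresses below are invented for illustration; real values come from `ohai`:

```ruby
# Stubbed stand-in for ohai's node['network']['interfaces'] data.
# These interface names and IPs are invented for illustration only.
node = {
  'network' => {
    'interfaces' => {
      'lo'   => { 'addresses' => { '127.0.0.1' => { 'family' => 'inet' } } },
      'eth0' => { 'addresses' => { '10.0.2.15' => { 'family' => 'inet' } } },
      'eth1' => { 'addresses' => {
        'fe80::1'       => { 'family' => 'inet6' },
        '192.168.33.10' => { 'family' => 'inet' }
      } }
    }
  }
}

# Same climb as in the recipe: take the last interface, then pick the
# address key whose 'family' is 'inet' (i.e., the IPv4 address).
interfaces    = node['network']['interfaces']
interface_key = interfaces.keys.last
addresses     = interfaces[interface_key]['addresses']

network_host = nil
addresses.each do |key, value|
  network_host = key if value['family'] == 'inet'
end

puts network_host # => 192.168.33.10
```

Note that this leans on Ruby hashes preserving insertion order, which is why `keys.last` reliably picks `eth1`, and on the fact that in ohai the addresses hash is keyed by the IP itself, so the _key_ (not the value) is what we want.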
```ruby network_host = nil addresses.each do |key, value| if value['family'] == 'inet' network_host = key end end ``` And that did it! I was able to call that variable in my config. ```ruby elasticsearch_configure 'elasticsearch' do configuration( 'network.host' => network_host, end ``` And call it good. :) ## Concluding Thoughts Everything seems hard until you break it into small, bite-sized, manageable chunks. I didn’t want to deal with this issue, and so I put it off until the very end. But when I sat down, talked it through, and mapped it out with Michael, it was suddenly much more manageable. Sounds a lot like life! --- # My Chef Workflow URL: https://hedge-ops.com/posts/my-chef-workflow/ Discover how to streamline your Chef workflow in this comprehensive guide. Learn how to set up a connection, bootstrap nodes, install policies, and more. Perfect for beginners and experienced users alike. Ever since I started [learning to remediate](/posts/red-green-refactor) my InSpec test failures, I started learning bits and pieces of Chef. I started out in the [Test Kitchen](http://kitchen.ci/), quite appropriately, and since then I’ve been learning how to zoom out little by little to see the forest for the trees. And because this blog serves selfishly as a place to store all of my notes for future use but also a place where noobs ( I really prefer newbs, but whatev) like me can benefit, I’m going to share my current notes for how I started from scratch and now currently maintain the process of Cheffing up my pipeline. Here’s what I’ll do: 1. [Set up connection from local machine to Chef server](/posts/my-chef-workflow#set-up-connection-from-local-machine-to-chef-server) 2. [Bootstrap the node](/posts/my-chef-workflow#bootstrap-node) 3. [Do the Real Work Now](/posts/my-chef-workflow#do-the-real-work-now) 4. [Install and upload policy to Chef server](/posts/my-chef-workflow#install-and-upload-policy-to-chef-server) 5. 
[Converge the node](/posts/my-chef-workflow#converge-the-node) 6. [Scan for compliance errors on Compliance server](/posts/my-chef-workflow#scan-for-compliance-errors-on-compliance-server)

## Set up connection from local machine to Chef server

First, I need to have an account on [manage.chef.io](https://manage.chef.io/login) so that my local computer can talk to their server. This establishes a connection for communication. After that I’m able to upload all of my cookbooks to the server so that I can converge any node I want to with those cookbooks. 1. Create `chef_repo` folder (You can call it whatever you want.) 2. Inside that, create `.chef` folder (You can’t call it whatever you want.) 3. On the Chef server in my web browser, click on the _Admin_ tab and click on my organization 4. Click _Generate Knife.rb_ 5. Go to _Users_ and select my user 6. Click _Reset key_ and click _download_ 7. Copy those files from the Download folder to the `.chef` folder 8. Run `knife ssl fetch`

## Bootstrap node

The next thing I need to do is set up a line of communication between the node(s) and the Chef server. This, however, is done on my local machine. The `knife bootstrap` command installs `chef-client` on the node that I’m bootstrapping. This process now allows my node(s) to communicate with the Chef server. And the Chef server makes the cookbooks available to the node(s) for convergence. 1. On my local machine, I’m going to run `knife bootstrap` `knife bootstrap -x -P --sudo --policy-group --policy-name -N ` 2. Now I’m going to go to my node, and I want to ensure that the following lines are in `/etc/chef/client.rb`

```ruby
policy_name "\\"
policy_group "\\"
use_policyfile true
```

## Do the Real Work Now

This is where I’d start if I had already completed the setup of my connection to the Chef server and bootstrapped my node. So for all the rest of my check-ins, I’ll need to double-check that all of this is complete before I make any changes to my policy. 1.
Code changes for cookbook - adding / editing cookbook files - adding / editing InSpec profiles 2. Make sure everything passes - Converge all resources successfully in Test Kitchen. - All tests pass. 3. Update version in `metadata.rb` 4. Commit and push to GitHub 5. Now I’m ready to put this version into production ## Install and upload policy to Chef server First of all, I know that you have two options here. You can go the policy file route, or you can use Berkshelf. [Michael](/about/michael) taught me the policy file route because that’s his preference, and so I tasked him with writing his own blog post to explain why. I’ll update you when he does. So anyway, when I install a Policyfile in a cookbook, I’m then able to tell it all the other cookbooks that I want to run at the same time and where to find them. So then the Chef server knows which cookbooks to put on which nodes because the nodes tell it which policy they have. 1. On your command line, from your cookbook directory in which the Policyfile.rb is located, remove the `Policyfile.lock.json` file if it exists by using `rm Policyfile.lock.json` 2. Run `chef install` 3. Run `chef show-policy` to get the Policy Group name as it shows the active policies for each group. It will be after the asterisk. 4. Run `chef push Policyfile.rb` (may have to use `sudo`) 5. Run `chef show-policy` again to show the active policies for each group. The ID it uses should match the ID inside the json file. (It’s the first number `revision_id`, and it will only give you the first 10 digits.) 6. Commit to Git so that you can have a history of the json for every time you do this. (You can call it `updated policy` in your commit message.) ## Converge the node When I’m converging a node, I’m basically running the `chef-client` command which runs all the recipes and cookbooks that are in the Policyfile on the node(s). 1. On a new terminal, open an ssh session to the node. 2.
Run `sudo chef-client` ## Scan for compliance errors on Compliance server After I’ve converged the node(s), then I want to scan them so that I can see what, if anything, still needs to be remediated. You can go here to my [Tour of Chef Compliance](/posts/tour-of-chef-compliance) if you need help remembering how to set it up. 1. Update your version number in the .yml file on your InSpec profile. 2. Compress the profile directory. 3. Upload the latest version of your profile to Compliance. 4. Add your node(s) to be scanned (check if IP address of VMs changed) 5. Scan your node(s) with the latest version of your profile. ## Next Step…Get Jenkins to do this for me ## Concluding Thoughts I’ve had a few conversations recently about how I started my journey into technology in a bit of a backwards manner - learning the upper level stuff like automation instead of the foundational things like networks and hardware and a bunch of stuff that I don’t know. But I really just wanted to start where there was a lowered barrier to entry and where the greatest demand for skills was, and I see that to be automation. My hope is that the foundational things will come in time and that the further I go in this journey, the clearer it will become where I should spend my time learning. --- # InSpec Basics: Day 8 - Regular Expressions URL: https://hedge-ops.com/posts/inspec-basics-8/ Dive into the basics of InSpec and learn how to write a test to search for regular expressions in this Learning InSpec blog series. So you know how when you’re learning stuff and something comes across your radar that you don’t get at all, so you make a mental note to study it later? Well, I have many of those, but the one I’m going to talk about today concerns writing a test to search for regular expressions. 
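To see concretely why the pattern deserves that study, here is a quick experiment in plain Ruby. The sample lines are invented for illustration (not pulled from a real `yum.conf`), and they show how a loose pattern passes lines it shouldn’t:

```ruby
loose = /gpgcheck=1/

# Invented lines a yum.conf could plausibly contain
lines = [
  'gpgcheck=1',    # the setting we actually want
  '# gpgcheck=1',  # commented out, i.e. NOT active
  'gpgcheck=12'    # a different value entirely
]

# Every one of them matches the loose pattern...
lines.each do |line|
  puts "#{line.inspect} matches: #{!(line =~ loose).nil?}"
end
# ...so a control built on it would pass even when the setting
# is disabled or wrong. Tightening the pattern fixes that.
```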
As a frame of reference and recap, here are the other InSpec posts that we’ve covered thus far: - Day 1: [Hello World](/posts/inspec-basics-1) - Day 2: [Command Resource](/posts/inspec-basics-2) - Day 3: [File Resource](/posts/inspec-basics-3) - Day 4: [Custom Matchers](/posts/inspec-basics-4) - Day 5: [Creating a Profile](/posts/inspec-basics-5) - Day 6: [Ways to Run It and Places to Store It](/posts/inspec-basics-6) - Day 7: [How to Inherit a Profile from Chef Compliance Server](/posts/inspec-basics-7) The other day a kind githubber [pointed out to me](https://github.com/anniehedgpeth/inspec-workshop/issues/1) that way back in the post about writing a [file resource](/posts/inspec-basics-3), perhaps I should have thought about creating a bit more of a strict search criteria when I was searching for a regex in a file. I totally agree with him; I just didn’t know how back when I wrote the first one. The basic gist of the control was to search a yum.conf file for `gpgcheck=1`. Easy enough, right? Well, this is the whole control that I ended up with:

```ruby
control "cis-1-2-2" do
  impact 1.0
  title "1.2.2 Verify that gpgcheck is Globally Activated (Scored)"
  desc "The gpgcheck option, found in the main section of the /etc/yum.conf file determines if an RPM package's signature is always checked prior to its installation."
  describe file('/etc/yum.conf') do
    its('content') { should match /gpgcheck=1/ }
  end
end
```

Some of you know exactly what’s wrong with that, and others of you are like I was—called it good. But let’s ask some questions to find out where the holes are. 1. What if there are a bunch of spaces before, after, or in between that text? Does that matter? 2. What if there is any other text before or after that regex? For example `gpgcheck=12`, or it’s commented out. Well, thankfully [Mr. Lovitt](https://twitter.com/lovitt) at [Rubular](http://rubular.com/) gave us a little cheat sheet.
![Rubular Cheat Sheet](/article_images/2016-08-01-inspec-basics-8/regex.png) So now, to address my questions above: 1. If I want to allow for any amount of white space to precede and/or follow the text, I can add `\s*`. The `\s` means white space, and the `*` means any amount. 2. If I want to disallow anything before or after my regex, then I can add `^` to specify the beginning of the line and `$` to specify the end of the line. I can also ignore anything commented out _after_ my regex with `(#.*)`. We can also use the `?` after that because, according to [RegExr](http://regexr.com/), it “makes the preceding quantifier lazy, causing it to match as few characters as possible.” Now I can use this guy, and it will be as strict as I need it to be so as not to get a false pass on my test.

```ruby
its('content') { should match /^\s*gpgcheck=1\s*(#.*)?$/ }
```

Obvi, just use the cheat sheet to find what’s right for your search.

## But

Another thing that my githubber friend and I were discussing, however, is that perhaps searching for regexes is a bit too messy. First of all, they’re ugly. Am I right? It took me quite a while just to figure out what it meant. And secondly, you put yourself at greater risk for falsely passing tests with all the gobbledygook, and no one wants that. Therefore, if there’s a better way to test, then do your best to find it. In this case, it’s possible that using the `parse_config` resource could be a better test, but more on that another time!

## Concluding Thoughts

There’s usually a better way to do everything, but that shouldn’t stop us from doing what we can to get started. I think that sometimes people get overwhelmed because they think they have to be at the [_refactored_](/posts/red-green-refactor) state from the beginning, but that’s not the ideal way to grow - whether you’re learning InSpec or starting a business.
We start small, manageable, and simple, and we grow and perfect from there, accepting our mistakes as learning tools. Also, refactoring, which takes time, effort, and discipline, can’t be done without laying the simple groundwork first. And in the end, hopefully, you come up with something that is meaningful and lasting. Go to Day 9: [Attributes](/posts/inspec-basics-9) --- # ChefConf 2016: Lessons Learned URL: https://hedge-ops.com/posts/chefconf/ Explore key takeaways from ChefConf 2016, including the role of security in DevOps, the future of the industry, and personal insights on career development in tech. I was so thrilled this week to be able to attend [ChefConf 2016](https://chefconf.chef.io/)! There were so many cool things about the week that got me super excited about the coming year. But also, there were things that made me excited about the future of the industry and how I can be a meaningful part of it. First of all, let me just say that I was a little hesitant about going. I’m new to the industry, and I do _not_ know everything about everything. But something I learned while I was there is that no one does! All it takes is a willingness and drive to learn more and more each day. That’s all anyone is doing. ![Chef Conf](/article_images/2016-07-15-chefconf/chefconf2016.png) For those that are new to technology, the feeling that everyone knows infinitely more than you can be really daunting. It’s tempting for it to feel a bit insurmountable, but I remind myself, and you, that all we can do is keep [forward momentum](https://youtu.be/h_hsQyk74k4) going and we’re solid. So the reasons that I went were varied. And many of my reasons can be better informed by [my last blog post](/posts/inspec-basics-8). 1. [I wanted to listen for the cultural issues surrounding security in DevOpsSec.](/posts/chefconf#devopssec) 2. [I wanted to get a feel for where security fits into devops overall.](/posts/chefconf#fitting-in) 3.
[I wanted to have my suspicions validated as to where security is headed.](/posts/chefconf#where-security-is-headed) 4. [I wanted to get a larger perspective for the industry to validate where I’m headed with my career.](/posts/chefconf#where-im-headed) ## DevOpsSec On the first day of ChefConf I attended the Community Summit which was a big [open space format](http://www.devopsdays.org/open-space-format/) discussion. So whoever wishes to, suggests a topic, and then everyone votes for what to discuss. I suggested that we discuss security and compliance and the issues surrounding getting that included into the pipeline. I honestly thought it was going to be a hot topic—high hopes. Turns out no one cared—well, seven people cared. But still. I was surprised. I thought it was a bigger deal than was communicated by the lack of votes. Still, I wasn’t discouraged but rather spurred on to spread the good news of [Compliance](/posts/tour-of-chef-compliance). As the week progressed, however, I found that it was definitely a topic that was of concern at a higher organizational level. Large enterprises are noting security and compliance as a bottleneck and are pushing for improvement. Therefore, those in leadership see a deeper focus on security automation as a very great opportunity for improvement. At 10:12, Barry Crist addresses this topic so well that I would have sworn he read my [last post](/posts/inspec-basics-8). At 13:33 he specifically addresses Compliance. Turns out he’s been on the same journey of discovery! How encouraging is that! ## Fitting In So I was interested to see how security and compliance were fitting into devops overall. DevOpsSec or DevSecOps or whatever you want to call it, has been a thing for a while now, but is security really as integrated as it needs to be into the ethos of devops strategies within organizations—not just up top but with your engineers, developers, sysadmins, architects, etc.? 
Short answer, I’ve found, is that it depends on the organization. Some have really embraced the challenge and have started doing it quite well, like [Optum](https://chefconf2016.sched.org/speaker/odie_routh.1v2stk3s) and [NCR](https://chefconf2016.sched.org/speaker/michael_hedgepeth.1v5jkgrw), and are becoming shining examples for others to emulate. For others, however, it seems like it’s still the stereotypical nuisance that is getting tacked on to the end of production. That just tells me that there is still a lot of growth to happen and a lot of room for more education and change. ## Where Security is Headed That said, it looks like dev and ops folks will come on board soon enough because companies are asking for it, leadership is pushing for it, and security automation is becoming more obviously necessary. Feels good, too, because it seems like I’m [on track](/posts/inspec-basics-8). ![Odie Routh Speaking at ChefConf](/article_images/2016-07-15-chefconf/roadahead.png) ## Where I’m Headed With all of that, it made me feel pretty good about the choice that I’m making to be more focused on security automation for a career start. I still think it’s an interesting problem to tackle, and even while I was there at ChefConf, I enjoyed getting people together to discuss the issues surrounding putting the Sec in DevOpsSec while I was there. It’s a multifaceted issue that will take some finesse within each organization to unravel. So I hope to start unraveling soon! --- # InSpec and Me URL: https://hedge-ops.com/posts/inspec-and-me/ Explore my journey learning Ruby and experimenting with InSpec and Chef. Discover how InSpec bridges the gap between Security and DevOps, and how I aim to fill a new role in this space. Hello my friends. 
If you’ve noticed that I’ve slowed down on the [InSpec](https://github.com/chef/inspec) goodness lately, it’s because I’ve been [learning Ruby](/posts/learning-ruby) and [experimenting](https://github.com/chef/chef/pull/5066) with [fixing bugs](https://github.com/chef/inspec/pull/810) and whatnot. It’s really exciting to see what I can do with Chef and InSpec once I get a good grasp on Ruby, but I underestimated the learning curve just a little. No worries, no hurries! Check out all we’ve covered so far: - Day 1: [Hello World](/posts/inspec-basics-1) - Day 2: [Command Resource](/posts/inspec-basics-2) - Day 3: [File Resource](/posts/inspec-basics-3) - Day 4: [Custom Matchers](/posts/inspec-basics-4) - Day 5: [Creating a Profile](/posts/inspec-basics-5) - Day 6: [Ways to Run It and Places to Store It](/posts/inspec-basics-6) - Day 7: [How to Inherit a Profile from Chef Compliance Server](/posts/inspec-basics-7) All of this [talking about InSpec](/posts/inspec-basics-1) has had me thinking about things, like: - [How InSpec bridges the divide between Security and DevOps](/posts/inspec-and-me#bridging-the-divide-between-security-and-devops) - [How I can see a certain role forming within Security teams that wasn’t there before](/posts/inspec-and-me#the-role-that-wasnt-there-before) - [How I’d like to fill that role](/posts/inspec-and-me#how-id-like-to-fill-that-role) ## Bridging the divide between Security and Devops For those organizations whose software security initiatives (SSI) employ security automation into the software lifecycle from inception to deployment to maintenance, then InSpec is perfect, right? Of course. (Let’s not forget you can use InSpec with Puppet, too.) You want your Software Security Group (SSG) to be structured properly and everyone fully invested. 
![SSG relationship with others](/article_images/2016-07-02-inspec-and-me/SSG.png) But let’s remember one of the many reasons why [The Phoenix Project](https://www.amazon.com/dp/B00AZRBLHO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1#navbar) hit home with so many people - the strained relationship, lack of trust, and back-biting between security and development. It’s a real thing, hopefully not in your company, but probably. You know the story - the good folks in Security and Compliance want to ensure compliance, so they have PDF after PDF documenting what they need for said compliance. There is then built into their culture a mistrust of outsiders because of their audit function; their process mustn’t be corrupted. Development, on the other hand, wants to get their software shipped as quickly as possible, so they look at the PDFs and say, “Yeah, yeah, of course, we’ll get to that.” But, of course, their need for speed is greater than their desire for compliance because they don’t have a full picture of the requirements and their company’s need for them. And so ensues the stereotypical strained security-development relationship. The problem lies in the fact that the two groups are speaking different languages. Security speaks PDF, and Development speaks code. So how do we translate for these two groups? (I think you see what’s coming here.) Ahem…InSpec does a great job at this. But how can your DevOps people convince your Security people that this is the way to go? As I’ve seen it played out, this is not as easy as it may seem. It takes a bit of _finesse_.
These two groups in your company may have been at odds for so long that it requires diplomatic and empathetic soft skills on the part of the DevOps person managing this affair - someone who can convince the rest of Development that security matters and that it’s not minutia and who can convince Security that automation will make everyone’s lives so much easier and enable them to focus on the higher level issues instead of staying in the weeds all the time. An openness to change and learn is necessary for all involved, from the top down. Once the most empathetic DevOps person convinces Security to give it a go, they can use the magic of [Chef Compliance](/posts/tour-of-chef-compliance) to most likely get them to about 80% compliant with their corporate security initiatives using the built-in profiles to scan against. This will give all people involved a greater sense of their current state of affairs, and Development can start remediating the heck out of things instead of spending all their time confused about what the security risks are. I can pretty much promise that this will get people excited, and things will start rolling with DevOps and Security working with each other instead of against each other. ## The role that wasn’t there before Because each organization is unique and has their own security policies, we’ll probably still have around 20% of compliance issues with which to contend. This is where, in my opinion, someone will need to swoop in. This person will need to create custom resources and profiles specific to their company’s policies and incorporate that job into the audit function within their security department. The way I see it, though, this is not your typical security person. This person will still need to research the latest and greatest security issues, but instead of putting them into a PDF, she’ll be putting them in code. “Wait, what?! A _developer_ on the _security_ team?!” Well, why not?
This person would still need to understand security at a deep level, but their main objective would be to understand the language of both worlds and bring them together. DevOps would serve a support function, but since Security would still need to be in control of the audit process, this person would bring the necessary additional skill-set for automation and how to drive it. Because this gives Security the ability to focus on higher level security matters, the cost of hiring on another person to fill this role is a no-brainer to me. It totally takes their company to the next level of security and compliance. ## How I’d like to fill that role I am, of course, a little biased because, if you couldn’t tell by now, I’d love to be that person. Now I didn’t come to that conclusion overnight. I have [stressed and stressed](/posts/introduction) over how I can be used to my fullest potential. I have all these soft skills, creativity, and an overwhelming need to problem-solve that weren’t being fully utilized in my other career paths. So I feel so very grateful to the InSpec team because they really lowered the barrier to entry into technology for me, and now I can use my strengths in a field in which I’ll have more diverse thinking to bring to the table, having _not_ spent the last twenty years immersed in technology. Being new to the DevOps and open-source community, I find it so very refreshing and inviting. Remember me saying that I was an [Airbnb superhost](/posts/red-green-refactor)? Well, I _love_ the sharing economy. It truly gives me hope for my kids’ futures. And that same heart is what I’ve found in the DevOps and open-source community. I feel like you (said community) really _get_ that there is talent out there waiting to be unleashed, and you’re doing everything in your power to welcome noobs like me into the fold so that potential can be realized.
In addition to InSpec, I’m grateful to [GitHub](https://github.com/) for being the other great factor in lowering the barrier to entry for me. [Michael](/about/michael)’s [company](http://www.ncr.com) is going through this transformation that I was portraying above, and so, being the good DevOps/Security liaison, we hosted a dinner at our house with his security colleagues, his boss, a lovely VP at Chef, and the two authors of InSpec, [Christoph Hartmann](https://twitter.com/chri_hartmann) and [Dominik Richter](https://twitter.com/arlimus). Honestly, they talked and I listened and observed. Then afterward Michael and I talked some more about the cultural dilemma that exists between Security and DevOps, and I wanted to _fix it!_ It’s such an interesting and complicated problem, and it’s seriously calling my name to come and solve it. But had it not been for GitHub, that dinner conversation would have ended there, no more story. Instead, I was able to start learning InSpec and even give back by writing about it on this blog which is hosted by [GitHub Pages](https://pages.github.com/). I could also interact with Chef and InSpec friends on GitHub and then deepen my understanding by trying to fix bugs. As someone who is new to the industry, this is incredibly helpful because I can point prospective employers to [my GitHub page](https://github.com/anniehedgpeth) instead of handing them a one-page document of my experience. These opportunities would not have existed before GitHub. And so, my story continues, and we’ll see where it goes from here. I’d be very interested to see if my predictions come to pass in the industry as this need for the DevOpsSec position grows as the need for security automation grows. And we’ll see where I end up. I’ll definitely be solving some problems _somewhere_, just don’t know _where_ yet. P.S. 
Thank you to all my Twitter friends who retweeted that [I’m on the prowl](https://twitter.com/anniehedgie/status/748643963431587840) for a job in Chef/InSpec/Security! Feeling the love. P.P.S. I’ll be at [ChefConf](https://chefconf2016.eventcore.com/) next week, so give me a shout-out if you’ll be there! Go to Day 8: [Regular Expressions](/posts/inspec-basics-8) --- # Why and How I’m Learning Ruby URL: https://hedge-ops.com/posts/learning-ruby/ Discover the journey of learning Ruby programming from scratch. This blog post shares personal experiences, challenges, and resources used in mastering Ruby. I’ve had my head in the books lately, and guess what I learned? I learned that I cannot learn [Ruby](https://www.ruby-lang.org/en/) and the fundamentals of programming in a week. I had high hopes of learning _enough_ to be able to write a custom resource for [InSpec](https://github.com/chef/inspec), but, as you probably know, that was silly. Last summer we had to drain our pool and refill it, and if you’ve ever done that, you’ll know that it takes much longer than you’d think. The scary thing, however, is that Texas soil is known for [popping empty pools](https://www.google.com/search?q=pool+popping+out+of+ground&espv=2&biw=1472&bih=981&tbm=isch&tbo=u&source=univ&sa=X&ved=0ahUKEwjN9e6zl8LNAhUj0YMKHZGIAVkQsAQIGw) straight out of the ground. But if you just keep the hose in there (yes, a hose, we’re old school), then it eventually fills up. So that’s kind of my approach to learning Ruby: 1. _Create a crisis._ 2. _Just keep plugging away._ The key to all of my learning is to create a crisis with a deadline (i.e. don’t want the pool to pop out). If I didn’t have a little bit of managed stress and pressure, then I would struggle with motivation (i.e. maybe wouldn’t want to use all that water). I then find myself telling you and [Mr.
Hartmann](https://twitter.com/chri_hartmann) that I’m going to write a blog post about writing a custom InSpec resource in Ruby, and there I have it—the proper amount of stress and motivation to learn Ruby. I haven’t coded anything since high school computer science class where we had to print out our PASCAL code onto continuous feed paper (I can still hear that awful printer in my mind) and turn it in to be graded. Something tells me they don’t do it that way anymore. Needless to say, a lot has changed since then. So going from nothing to Ruby is a little daunting, so I just started plugging away. I started with [Learn Ruby The Hard Way](http://learnrubythehardway.org/book/), but quickly realized that I had to zoom out a bunch because I couldn’t see the big picture. So I dug into [Computer Science Programming Basics in Ruby](http://shop.oreilly.com/product/0636920028192.do). Once I had the big picture and the vocabulary in my head, then I could go back to the exercises in LRTHW. ![Unpacking](/article_images/2016-06-24-learning-ruby/unpacking.png) [As I’ve told you](/posts/introduction), I’m preparing to rejoin the workforce soon since I’m about to be a pre-school empty nester (read: my youngest is going into kindergarten), and I want to have as many tools and skills at my disposal as possible. I’m seeing sort of a fuzzy picture of a career in security automation, perhaps on the development side of things. Honestly, I don’t know, but that’s sort of the direction that seems correct right now. What I do know is that I’m very excited about learning Ruby and the opportunities that it affords me. My existence has to include creating things and solving problems—has to. It’s what makes me come alive and engages me more than anything. And it’s both a great surprise and joy to me that I can experience the same sort of satisfaction in coding that I can in creating a [piece of furniture or art](https://www.instagram.com/explore/tags/reclaimedhomeinteriors/). 
--- # Finding Alignment URL: https://hedge-ops.com/posts/finding-alignment/ Explore the keys to finding alignment in business initiatives. Learn how respect and empathy can help bridge gaps and foster collaboration. Discover how different teams align with DevOps. I’ve got a lot of things on my plate right now, but let me be clear: I’m not going to stand in the way of what the business wants to do. I’ve been told this many times before. In the early days of trying to create alignment for initiatives that are important to our strategy, I would have taken it as a sign of support. Instead, I translate it into what this same person would say to her boss about this same subject. “Michael came and talked to me about \[Awesome Initiative\]. (Sigh) He says it’s going to really change things for us. Are we really going to waste our time with this crap or are we going to serve our customers? I’ll do whatever, but fair warning: this project has ‘disaster’ written all over it, and you won’t get that feature you’ve been asking for either, you can kiss that goodbye for this year.” That’s quite a different story, isn’t it? So how to respond? I’ve been tempted in the past to write people off and try to do it anyway. I’ve found over the years though that this is a mistake, because a sufficiently motivated individual _can_ and _will_ destroy your project if they don’t feel listened to. I’ve found that the keys are _respect_ and _empathy_. _Respect_ can’t be faked. I can’t _really_ only respect my opinion, or those of the consultants I’m working with, and then merely try to get _you_ on board with _my_ project. That doesn’t work. People see past that. Instead, I need to walk into the relationship with the understanding and true belief that you are a smart person with real needs that may be solvable by what I’m working on. This is a discovery of where your needs meet my solutions. 
I’m open to changing my solution because I cannot possibly predict all of your needs without your iterative interaction. _Empathy_ is key because I really need to _feel_ what you’re feeling, from a business perspective, and even from a personal perspective. What makes you excited or happy? What are you afraid of? What is painful for you? How does that affect you? Is there anything I can do to help you with that? Following this pattern has led me to understand that, at a generic level, different groups have a natural alignment and misalignment with DevOps:

| Team | Natural Alignment | Natural Misalignment |
| --- | --- | --- |
| Development | Faster delivery of features | Have to be engaged in operations, more “work” to do |
| Operations | Fewer fires, more consistency | Have to learn a new skillset and be a beginner |
| Security | More consistency, compliance | Automation can cause unknown vulnerabilities |
| Business | Faster ROI for development, lower cost for operations, and a scale model that works | Takes ongoing investment in culture and tools |

As I focus on respect and empathy I can find the natural alignment and mitigate the natural misalignment with each group. _For developers,_ we can help them deliver features faster and thus get a truly agile feedback loop. But we can also help them culturally begin to share the ownership of the operability of their product with the operations team. The key is ensuring that their desire for consistency and velocity outweighs their disdain for caring about how it works in production. We do this by finding their pain points in operations, catching them _blaming_ operations for those pain points, and then showing them that they actually can do something about their problem.
_For QA,_ we can help them deliver features more safely by minimizing the amount of time they spend creating environments and ensuring that they have consistent, hardened environments. This is all exciting, but they must engage the other teams to define infrastructure, and they must be willing to not be lazy and just click the damn checkbox in IIS when they want something changed. Culturally they shouldn’t have a problem with this, but in reality QA always gets changes at the end of a release or sprint and is under a time crunch to ensure quality. So we work with them to show them how much time they are spending on this and how we can use automation to get them focused on the important stuff: ensuring quality in a way that computers can’t or writing automation against production-like environments. _For Operations,_ we can help them avoid the unexpected problems that come about without good automated configuration management. But to them, it comes at a cost. Many of the things that made them successful (incident resolution, working crazy hours, following directions) aren’t compatible with creating and engaging with automation. So they have to be fine with starting from scratch and building themselves back up. They have to see a payoff if they’re going to take that kind of career and time risk. The payoff in my mind is that they get to do more valuable work for the business by automating things, and that value will translate into a more rewarding job for them. _For Security,_ we can help them achieve the consistency and compliance at all levels that has eluded them. Yes, they have controls defined and exercised in production. But they struggle with other teams thinking of them as a blocker to progress.
Also, as we automate more, they’re afraid that we are going to take [this nice shiny DevOps Ferrari and wreck it into a wall.](http://www.dailymail.co.uk/news/article-3022707/Worst-valet-Hapless-garage-attendant-destroys-300-000-Ferrari-599-GTO-bringing-round-owners-hit-accelerator-instead-brake.html) So we work on automating compliance and on developing a change management process that ensures proper control and separation of duties on the way into production. We don’t fight them. We work with them to achieve _their_ goals. Once this happens, the perceived blocker becomes a champion and driver of our changes.

_For Management,_ they get the benefits outlined above. But they also have to lead a cultural transformation. We can’t do it the easy way when that means _just running the command_ or _clicking that checkbox over there_. We have to be committed to a repeatable, auditable process for change. So we need to train the people we have with new skills and be patient as they figure this out. We need to be OK with someone spending a little while on automating that thing, because the payoff will be huge over the next few years. And we need to be OK with absorbing the cost of tooling and vendor partnerships to realize the dream. Once we see things this way, it’s much easier to get everyone to the goal together. While I can’t guarantee that one won’t encounter the statement at the beginning of the post, I can guarantee that following this model will give someone something _constructive_ to do afterward. --- # Finding Habitat URL: https://hedge-ops.com/posts/finding-habitat/ Explore the world of Habitat, a disruptive technology that revolutionizes configuration management. Learn about its integration with Chef’s ecosystem and how it can streamline your workflow. A few months ago I caught up with Julian Dunn in Ghent about what he was up to.
His [talk on orchestration](https://www.youtube.com/watch?v=kfF9IATUask) was instrumental in forming [our approach](/posts/orchestration-maturity-model-with-chef) to solving the problem with Consul, and his [blog post on docker](http://www.juliandunn.net/2015/12/04/the-oncoming-train-of-enterprise-container-deployments/) showed me he was thinking deeply and critically about some interesting topics. I reached out to him and spent some time with him learning about Habitat. When I learned that Fletcher Nichol was also working on the project, I got even more excited. Fletcher’s work on [Test Kitchen](http://kitchen.ci/) has [revolutionized our workflow](/test-kitchen-required-not-optional). Recently, I saw it lower the barrier to entry for [my wife](/about/annie) to learn Chef. There really is a _before Kitchen_ and _after Kitchen_ epoch in her learning. It’s that revolutionary. And to see that Fletcher was focusing on this problem as well was quite exciting. [Adam Jacob’s blog post](https://www.chef.io/blog/2016/06/14/introducing-habitat/) left me both intrigued and a little confused. I wanted to understand what Habitat was and how it fit into Chef’s infrastructure and strategy. So I watched the event and got on Twitter and had a fun time figuring it out. It was clear to me even from my early talks with Julian that Habitat was a disruptive technology. This is yet another reason why [Chef is such a good partner for us](/posts/technology-partnership). I can trust them to prioritize _the right solution_ for me over whether this will help their sales numbers this quarter. That trust drives sales higher than at other companies because navigating this journey is difficult, and we need people on our side who will tell us the hard truths on how to arrive at our destination. So kudos to Chef and its leadership for being so brave to make this step that says _there’s another aspect to the problem that could be better, here’s what we think_.
Unfortunately, the initial message of Habitat didn’t resonate with me. I feel that it suffered from a few flaws that hurt its appeal to enterprise customers like me:

- _You’re doing it wrong_. I reread the narrative about the siloed enterprise and the big web and am still struggling to understand it. What I felt initially is _how you have been approaching the problem of configuration management is all wrong_. I’m not sure whether that was the intended takeaway because, honestly, I still don’t understand the narrative they were going for. I do know that a good pitch to enterprise people _should not_ start with the message _your organization is fundamentally improperly structured_. People don’t like to hear that. And even if they agree, there is nothing they can do about it. The message (intended or not) isn’t necessary because Habitat doesn’t ask you to change any of that (see my revised message below).
- _Here is a Cool Solution._ And it is. If you put some really great people on this project for months or more, I expect it to be cool. But from an enterprise perspective its coolness carries little weight on whether it will help us solve the problems we are having. I would have preferred some discussion on what outcomes the team was able to accomplish as they partnered with an early customer, preferably from the enterprise. What did they do? Is it better? Why? Without that background, it was difficult to put the solution in context.
- _Context?_ This was the part I struggled with the most. How does Habitat fit into the Chef ecosystem? Where does Habitat end and Chef begin? What problems does it solve that _both_ products could solve? Why would I choose one over the other? It was difficult to understand, especially over the medium of Twitter where characters are limited and tone isn’t easily communicated.

Am I still excited about Habitat? Absolutely!
After talking with a few people and with Julian for 15 minutes or so, I can now think about it in terms that make sense to me and that I can share with others in our organization. In the spirit of providing alternatives when sharing problems, here’s the pitch I would give if someone in my organization asked me about it today:

> Chef has done an outstanding job with configuration management of infrastructure. This is why they are our partner. They have built upon that core competency with a reporting product to see what’s happening and a delivery product to manage changes. On top of that (and most importantly for us), they even help make your infrastructure more secure by helping you scan your infrastructure for security vulnerabilities and use Chef to remediate them. With Chef, it is very easy to get a secure, hardened infrastructure configured for your business.
>
> That’s not all you want to do, though. You want applications _running_ on that infrastructure. And it turns out when you start down this path, things get complicated quickly. You face problems that aren’t _really_ configuration management problems, like orchestration or service discovery. You have to figure out how to scale. And you have an application team that wants to focus on _those issues_ rather than configure a specific machine to run. They might even insist on _not_ running a production-like machine for their development by insisting that they use Docker to speed up development. How do you engage the application team in a way that helps them own their solution and use it, then deliver that automation to a broader ecosystem in a meaningful way?
>
> Enter Habitat. With Habitat, your application team can define availability, upgrade, red/green deployments, and other application-level concerns and package that _with the application_ and deliver it to their target environments. This means that Chef can focus on what it’s good at: configuration management of the infrastructure.
> A Habitat package can live as a Docker container on a development machine, a minimal QA environment, or as a full-blown Linux node which was also configured using Chef.
>
> It’s tempting to try to find the one solution that will solve all of your problems. Many times that leaves you doing _a lot of work_ as you try to solve a problem with a solution that was not meant to solve those types of problems. Instead, it’s totally fine to have a solution to the application’s problems and a different solution for the infrastructure problems. As long as both solutions start with code, are tested early and often, and meet together very quickly, we can take advantage of their differentiated power.

This, to me, is Habitat’s story and is what makes me so excited for its future and so happy that I’m a partner with Chef. --- # InSpec Basics: Day 7 - How to Inherit a Profile from Chef Compliance Server URL: https://hedge-ops.com/posts/inspec-basics-7/ Learn how to inherit a profile from Chef Compliance Server with our InSpec Basics tutorial. Modify controls to suit your needs and enhance your Chef Compliance usage. I’m back again today with yet another InSpec tutorial. As always, if you haven’t dipped your toe into the [InSpec](https://github.com/chef/inspec) pool yet, now you can:

- Day 1: [Hello World](/posts/inspec-basics-1)
- Day 2: [Command Resource](/posts/inspec-basics-2)
- Day 3: [File Resource](/posts/inspec-basics-3)
- Day 4: [Custom Matchers](/posts/inspec-basics-4)
- Day 5: [Creating a Profile](/posts/inspec-basics-5)
- Day 6: [Ways to Run It and Places to Store It](/posts/inspec-basics-6)

Perhaps you’ve been using Compliance, but the profiles in there are not exactly what you need. Maybe you want to take a few controls out and add a few others. Today we’ll be discussing how you can do that by inheriting a profile to modify for use in [Chef Compliance](https://www.chef.io/compliance/).
It’s pretty simple to do; the only catch is that you have to use it within the Compliance server, nowhere else. It would be pretty cool if you could inherit a profile to use with the audit cookbook or in Kitchen, but they’re not quite ready with the new dependency management feature yet. I’ll update this post when I hear that it’s there.

## Overview

1. [Determine which controls are not needed from the Compliance server profile](/posts/inspec-basics-7#determine-which-controls-are-not-needed-from-the-compliance-server-profile)
2. [Change the controls in an inherited profile](/posts/inspec-basics-7#change-the-controls-in-an-inherited-profile)
3. [Using the inherited profile on Chef Compliance](/posts/inspec-basics-7#using-the-inherited-profile-on-chef-compliance)

## Determine which controls are not needed from the Compliance server profile

You have a failing report because there are a bunch of controls in the profile that either you don’t need or need to be different. You’ll need to know how to change those controls to fit your company’s needs. Let’s look at an example. We can go to our Compliance dashboard, find a report, and take a look at the failures:

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-06-14-inspec-basics-7/failure.png)

For this tutorial, let’s focus on the first one: Set Password Expiration Days. Let’s say, also, that you want the password expiration to be set to 30 days instead of 90. The way we’ll do that is by scanning with an inherited version of that profile that _ignores_ that particular control and _adds_ another control that tests for 30 days. Let’s go find it. We can see that it was the `cis-ubuntu14.04lts-level1` profile, so let’s go to the Compliance tab and find that profile. Click on it, and find the offending control.
![](https://ik.imagekit.io/hedgeops/site/article_images/2016-06-14-inspec-basics-7/control.png)

What you’d do here is make a list of all the controls that you’d need to change. Right now, we just need this one, so I’m going to copy and paste that one control name.

## Change the controls in an inherited profile

First we’ll need to go back to our command line and get started by creating a new profile.

```shell
inspec init profile
```

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-06-14-inspec-basics-7/profile.png)

Now we can open that in our editor and take a look at our example.rb file. We’ll find this already there.

```ruby
describe file('/tmp') do
  it { should be_directory }
end

# you add controls here
control 'tmp-1.0' do                # A unique ID for this control
  impact 0.7                        # The criticality, if this control fails.
  title 'Create /tmp directory'     # A human-readable title
  desc 'An optional description...'
  describe file('/tmp') do          # The actual test
    it { should be_directory }
  end
end
```

But the nice guys at InSpec have also given us this handy little snippet from [their git page](https://github.com/chef/inspec/blob/master/examples/inheritance/controls/example.rb), so let’s copy that.

```ruby
include_controls 'profile' do
  skip_control 'tmp-1.0'
end
```

So I want to tell it to still use that profile, but skip the offending control. But I’m also going to add another control that is specific to my company’s needs. So I’m just copying the old one exactly and changing the number of days for which it’s testing. But be mindful! Obviously, that’s not going to work if I tell it to skip the control and then don’t change the name of the control that I’m adding, right? So notice that I added `To_30` to the end of the control name that I’m adding.
```ruby
include_controls 'cis/cis-ubuntu14.04lts-level1' do
  skip_control 'xccdf_org.cisecurity.benchmarks_rule_10.1.1_Set_Password_Expiration_Days'

  control "xccdf_org.cisecurity.benchmarks_rule_10.1.1_Set_Password_Expiration_Days_To_30" do
    title "Set Password Expiration Days"
    desc "The PASS_MAX_DAYS parameter in /etc/login.defs allows an administrator to force passwords to expire once they reach a defined age. It is recommended that the PASS_MAX_DAYS parameter be set to less than or equal to 30 days."
    impact 1.0
    describe file("/etc/login.defs") do
      its(:content) { should match /^\s*PASS_MAX_DAYS\s+30/ }
    end
  end
end
```

## Using the inherited profile on Chef Compliance

You should be good to go now. All you need to do is zip up your profile, upload it to Chef Compliance, and run it! There you see the control that we changed.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-06-14-inspec-basics-7/compliance.png)

## Concluding Thoughts

I think that the ability to create inherited profiles is absolutely necessary when seriously using Chef Compliance. It will be even better when they develop the dependency management feature so that we can use these inherited profiles outside of Compliance. I did have a few minor issues with this process that I’m sure they’ll fix soon, but it’s something that you can be aware of, so it doesn’t slow you down. First of all, I had a really minor error; I was missing an `end` and didn’t notice. So when I tried to upload my compressed profile, it didn’t do anything - not even give me an error message that I could understand. It took me creating an [embarrassingly simple issue on GitHub](https://github.com/chef/inspec/issues/789) to learn of my typo. The other thing that was kind of a bummer is that I couldn’t run it locally first before uploading it. So that embarrassingly tiny error went unnoticed and caused me a bit of a headache.
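One small consolation: even though the inherited profile can’t be run locally before uploading, the regex at the heart of the new control can be exercised in plain Ruby. Here’s a quick sketch (the `login.defs` content below is made up for illustration):

```ruby
# Hypothetical /etc/login.defs content for illustration.
content = <<~LOGIN_DEFS
  # Password aging controls
  PASS_MAX_DAYS   30
  PASS_MIN_DAYS   7
LOGIN_DEFS

# The same pattern the control matches against the file's content.
pattern = /^\s*PASS_MAX_DAYS\s+30/

puts content.match?(pattern) # prints "true" when PASS_MAX_DAYS is set to 30
```

If this prints `false` against a copy of a node’s real `/etc/login.defs`, the control would fail on that node too.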
All in all, errors aside, the process was pretty simple, and it taught me a new concept since I didn’t even know what inheritance was before I learned this process. Go to Day 8: [Regular Expressions](/posts/inspec-basics-8) --- # Disruption is Uncomfortable URL: https://hedge-ops.com/posts/disruption-is-uncomfortable/ Explore how disruption can lead to discomfort in a DevOps environment, and learn how to navigate change effectively. Discover the importance of empathy and communication in managing disruptive changes. The other day we were doing an urgent Chef policy update in our production environment. It quickly became clear to all that this was uncomfortable. I had written extensive documentation to explain, as I saw it, how to interact with Chef within our system. The person who was assigned the task, however, wasn’t able to make much sense of my documentation. So we had an awkward back and forth where he was asking me for the commands he should run, and I was telling him that he should know what he is doing when running those commands. It was all very uncomfortable. So I talked to a colleague about it who reminded me: “Hey Michael, you’re doing something disruptive here, you shouldn’t be surprised when it is…disruptive.” Oh, yeah. I then calmed down, got another colleague on my immediate engineering team on a call, and we figured out a better way to bring people on board with the process. We didn’t get angry. We didn’t go to lunch and talk about how idiotic that other team was. We realized that disruption is uncomfortable, so we can empathize with that and help people along. I think this has been a major part of my job: helping people through the discomfort of change that DevOps presents to their status quo. --- # InSpec Basics: Day 6 - Ways to Run It and Places to Store It URL: https://hedge-ops.com/posts/inspec-basics-6/ Explore the different ways to run InSpec and the various places to store it in this comprehensive guide.
From running and storing InSpec locally to using Chef Compliance, this blog post covers it all. Hello, my friends. I hope you’re back for some [InSpec](https://github.com/chef/inspec) goodness. I’ve missed [talking about InSpec](/posts/inspec-basics-1)! Check out all we’ve covered so far:

- Day 1: [Hello World](/posts/inspec-basics-1)
- Day 2: [Command Resource](/posts/inspec-basics-2)
- Day 3: [File Resource](/posts/inspec-basics-3)
- Day 4: [Custom Matchers](/posts/inspec-basics-4)
- Day 5: [Creating a Profile](/posts/inspec-basics-5)

I’ve been quite occupied lately building my skill set by studying up on Linux, Chef, Kitchen, remediation workflow, and a little bit of Ruby so that I can use InSpec in a broader sense. No big. Seriously, though, starting from scratch is not easy, but it’s definitely not boring, either. I’m not exactly giving you another tutorial today, but instead I want to step back a little bit to get a broader perspective of InSpec. I’m going to talk about the different ways in which we can run InSpec and the different places in which to store it.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-06-09-inspec-basics-6/whereandhow.png)

## Running and Storing InSpec Locally

Of course, we start locally, right? We’ve [done this already](/posts/inspec-basics-1). You’re simply saving the commands to a directory on your local machine and then running them from the command line. This is obviously just for testing in development. In [film terms](/posts/introduction), I think of this as pre-production, but I guess I need to get used to calling it by its proper name. This is for when we’re in the process of [creating our profile](/posts/inspec-basics-5) and seeing if it works. And while we’re doing that, we’re also [testing like mad](/posts/red-green-refactor) to ensure speedy success and to keep things nice and neat.
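For concreteness, a local run is just `inspec exec` pointed at a control file or a profile directory. The file, profile, and host names below are placeholders, not from the tutorials:

```shell
# Run a single control file against the local machine (hypothetical file name).
inspec exec hello.rb

# Run an entire profile directory.
inspec exec my-profile/

# Or point the same profile at a remote target over SSH.
inspec exec my-profile/ -t ssh://user@example.com
```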
## Running InSpec Profiles Through Test Kitchen

I had a lot of fun learning how to do this workflow this week (which is why I was studying up a lot). This is only for testing in development, too. When we run our profiles in Kitchen, we can test against cookbook development and remediate failures through the cookbook. We can use profiles stored just about anywhere for this:

- locally
- [GitHub](https://github.com/)
- [Chef Supermarket](https://supermarket.chef.io)
- [Chef Compliance](/posts/tour-of-chef-compliance) (you’ll need to log in first and use an API token)

Your .kitchen.yml might look a little something like this (pick your own `inspec_tests` sources, of course):

```yaml
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
verifier:
  name: inspec
platforms:
  - name: centos-6.7
suites:
  - name: default
    run_list:
      - recipe[inspec-workshop-cookbook::default]
    verifier:
      inspec_tests:
        - /Path/to/local/folder
        - https://github.com//
        - supermarket:///
        - compliance://base/ssh
```

## Scanning a Node in Chef Compliance

So [we’ve done this](/posts/tour-of-chef-compliance), and it was so easy and fun. And this is for use in all stages of the development life cycle. And I’m a little embarrassed because I thought it might be complicated to upload your profile to Chef Compliance, but this is literally as complicated as it gets:

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-06-09-inspec-basics-6/upload.png)

Just zip it up and upload it. You can also upload it from the command line using the `inspec compliance upload` command after you authenticate/log in with the `inspec compliance login` command. When you’re scanning on Chef Compliance, you can only use profiles that are stored on the Compliance server, not on GitHub or the Chef Supermarket. But I hear rumblings of the ability to store it on Chef Supermarket for use in Chef Compliance in the near future.
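The command-line upload mentioned above looks roughly like this. The server URL, user, and token are placeholders, and the exact flags have varied across InSpec versions, so double-check `inspec compliance login --help` on your install:

```shell
# Authenticate against your Compliance server (placeholder URL and credentials).
inspec compliance login https://compliance.example.com --user admin --token 'YOUR_API_TOKEN'

# Then upload a profile directory or zipped archive.
inspec compliance upload path/to/my-profile
```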
## Running InSpec in an [audit cookbook](https://github.com/chef-cookbooks/audit)

You might not be able to scan on Chef Compliance. Perhaps you don’t want to store credentials on the Chef Compliance server. And you may not want the Chef Compliance server to see the nodes you’re scanning for security purposes. In that case, you’ll want to use the [audit cookbook](https://github.com/chef-cookbooks/audit). This cookbook will run your InSpec profiles as a part of your `chef-client` run by pulling your profiles off of wherever you’re storing them - the Supermarket, Compliance server, GitHub, etc. While the results of the scan will go to the Compliance server and supply the data for all those pretty charts, the server will never have scanned your machine. This, too, is for use in all stages of the development life cycle and has the flexibility to have profiles stored in:

- [GitHub](https://github.com/)
- [Chef Supermarket](https://supermarket.chef.io)
- [Chef Compliance / Automate](/posts/tour-of-chef-compliance)

## Concluding Thoughts

After learning InSpec at a very basic level, I was pleased with how approachable and easy to grasp it was. And the more I’ve worked with it, the more versatile I’ve found it. It’s been a great study tool for me because I was able to start out so simply and build on that knowledge. I think that’s the key to learning any new skill, really - start with small, manageable chunks and work your way up. Try not to get discouraged by what you don’t know, and focus on what you do know. Go to Day 7: [How to Inherit a Profile from Chef Compliance Server](/posts/inspec-basics-7) --- # How I Learned Red, Green, Refactor from Airbnb URL: https://hedge-ops.com/posts/red-green-refactor/ Discover how I learned the Red, Green, Refactor method from my Airbnb hosting experience. Learn how I improved my hosting skills and translated it into my IT workflow.
## Red

Two summers ago I had booked a place on Airbnb for a vacation, and in return Airbnb had asked if I wanted to rent out my place. I’ll try anything once, so I did it. I figured that we could just book it once and use that money to go on a vacation. (And we did! We totally broke even on the deal, including gas, food, and entertainment.) The one hiccup in the plan was that we couldn’t check the guests in personally because my husband wanted to get on the road early to go on our vacation. So we were driving the 10 hours to Taos, NM, and as with any long road-trip, you know that you experience several drops in phone signal, sometimes for quite a long time. I had lost my signal for probably an hour as we drove through the eastern portion of New Mexico, and suddenly I get about 10 texts and voicemails all at once.

> “Hello Mrs. Hedgpeth, we’re getting an alarm from your home. Since we can’t reach you, we will dispatch police.”

then,

> “Hi Annie, this is Emily, your Airbnb guest. We got in and the alarm was on, but you didn’t give us the code. Can you call me ASAP!”

followed by,

> “Annie, it’s Courtney. Your guests set off your alarm. The police are here. What’s your code? They can’t turn it off.”

and then,

> “Hi Annie, it’s David from across the street. There are cops going into your house. I’ll keep you updated.”

I. Lost. It. It was a good thing I wasn’t driving. I was so mad at myself. I _thought_ I had thought of everything. Now I had to spend the rest of my day cleaning up the mess I had created. I vowed to never rent my house out again. I told myself that I was just not cut out for it, that there’s just too much to think of and too much that can go wrong. I was totally right about there being too much to think of. And there’s a _lot_ that can go wrong. Unfortunately, I couldn’t quit because I had already booked the house out two more times after that, and we had already planned on using that money for two other trips.
I had no choice but to suck it up and figure it out. It was time to start making some lists. First, I made a list of everything that had gone wrong in that little scenario, and I made a plan to remediate it for next time. Then I made a list of everything that I had done in order to prepare, even down to the most minuscule of details.

![AirBnb](/article_images/2016-06-07-red-green-refactor/airbnb.png)

## Green

Now that I had the safety of my lists and processes, I was ready for our next booking. I was still a little afraid that I’d forget something, but my mindset had changed. Sure it would have been a big bummer if something else had gone wrong, but it would just serve to give me more data that I could use to further improve upon my processes. I’m happy to say that the next booking went off without a hitch. _And_, in a very short time I had the coveted title of [_Superhost_](https://www.airbnb.com/superhost). And I even exceeded the requirements!

![AirBnb success metrics](/article_images/2016-06-07-red-green-refactor/superhost.png)

- You must host 10 trips within the last year. I did _25_.
- At least 80% of your reviews need to be five stars. Mine were _100%_!
- Superhosts maintain a 90% response rate or higher by responding to guests quickly. Mine was _100%!_
- Superhosts don’t cancel confirmed reservations unless there are extenuating circumstances. I _never_ canceled.

## Refactor

With each booking (I went on to do it for another year after that) I improved upon my system. There were little things that I had to add to the list here and there that only served to make the experience better for all involved. I even wrote a 40-page house manual with every detail you could ever hope to know about my house, from how to work the washer to where the nearest hospital is.

## Workflow

So how does that translate into technology for me?
Most of you devopsy, unit tester types already know all of this, but for me, it took a really frustrating two days at the coffee shop to be reminded of the importance of a good workflow.

![Coffee Shop](/article_images/2016-06-07-red-green-refactor/coffee-shop.png)

We have been without internet at the house for a _week_ (thanks for nothing, Frontier FiOS), so I’ve had to spend many long hours at the coffee shop like a vagrant. Working without big monitors in a public place is enough of a challenge, but when you’re scattered in your mind without a proper workflow, you just totally set yourself up for wasting a whole lotta time. _You start on something…find an issue…it reminds you of another issue, so you go to it…you forget about the first thing…you can’t figure out the second thing, so you go back to the first thing…you don’t remember it, so you have to retest…someone you know walks in, so you say hi…you think you know how to solve that second problem now…your favorite song comes on and your mind wanders…and so on._ It’s a frustrating, jumbled mess!
But knowing is half the battle, so as I’m trying to write my control tests and remediate through kitchen, I have a simple plan that will keep me on the straight and narrow:

_Red_—write a control and make it fail

- Run `kitchen converge` to make sure my machine is in the latest state
- Run `kitchen verify` to run InSpec on the latest state of said machine
- Write a control for the current test I’m running
- Run `kitchen verify` again to see if it failed

_Green_—fix the control with Chef

- Remediate my failure through a resource in my cookbook in kitchen
- Run `kitchen converge` to fix the problem
- Run `kitchen verify` again to see if it fixed it

_Refactor_—make sure I have a good solution going forward

- Clean up—is there a better way?
- Check in—a little at a time because when it breaks it sucks, and it’s hard to figure out where it broke

## Concluding Thoughts

When things are all in a jumble, and I’m confused and frustrated and mad, it’s easy to tell myself some pretty self-defeating junk. But focus is so simple and so powerful. And in the past when I’ve really focused on things, like Airbnb, I’ve had great results. I know that the same will be true for my IT pursuits, and I’m excited to see what happens. --- # Premature Optimization URL: https://hedge-ops.com/posts/premature-optimization/ Explore the pitfalls of premature optimization in organizational planning. Learn how focusing on immediate improvement opportunities can lead to transformative change. I was talking to a colleague the other day who is working on a centralized initiative that has the potential to do a lot for our organization. He’s excited. He’s going to meetings, [getting alignment](/posts/alignment), [getting funding](/posts/funding). In it all, leaders are asking him for a [grand vision](/posts/the-grand-vision) that will bring all the disparate parts together into a coherent whole.
He delivered that grand vision in the form of a plan that would bring a set of solutions together to satisfy what all stakeholders are asking for. I’ve been in that situation myself in the past. It’s all very exciting. Every meeting you have has a sense of purpose and direction. You are _finally_ bringing this change to the organization that it so desperately needs. Unfortunately, in the past, I’ve missed the reality that the only thing that is known is the next one or two things that need to be done to improve the current situation. The grand vision might be needed to bring alignment and funding into the situation. But that vision can remove me from a stark reality: if I don’t act upon the improvement opportunities that stand before me _right now_ with a high level of urgency, I will not end up making the transformative change that I am promising to everyone. I might still deliver a tool. I might even declare _Mission Accomplished_ as I do. But without a flow of improvements that have the regular engagement of all stakeholders, the tool or initiative is destined to have little effect. People say it’s bad to prematurely optimize code. It’s just as bad to prematurely optimize solutions. Make your solutions fit the problem you’re facing today, and give just enough vision to keep things moving in the right direction. --- # MomOps URL: https://hedge-ops.com/posts/momops/ Explore the journey of a mom diving into the world of software development. From learning Linux basics to tackling InSpec, this blog post shares the highs and lows of breaking stereotypes in the tech industry. I’m taking a short detour from [InSpec-land](/posts/inspec-basics-1) today, but don’t fret; I’ll be back soon. Whenever I hit a roadblock in my InSpec studies, I’ve realized that it’s because I’ve covered a fair amount of ground in InSpec, but I haven’t spent enough time learning Linux.
So I’m going to get a better foundation with Linux basics so that I can come back to InSpec with greater flexibility and understanding. And now for a bunny trail… So the other day I was chatting with the dad of my son’s friend. He’s a nice enough guy, and he’s in technology, so my husband [Michael](/about/michael) was like, “Oh cool, I’m a developer, and Annie’s getting into software, too.” And the guy looks at me and goes, “So are you in software or software services?” To be honest, I didn’t even know what that meant, so I looked over to Michael, and he explained, “He basically means, ‘do you develop software or sell it?’” I proceeded to explain what I’ve been doing the past month, starting from scratch, learning the Linux bash shell, learning InSpec, etc. What I should have said was, “Why didn’t you ask Michael that?” But we all know why he didn’t ask Michael that, don’t we? Michael _looks_ like he’s in technology. He’s a white male. I, on the other hand, [don’t fit the mold](/posts/introduction) quite as well. Turns out that the guy writes security software. Uh, hello? We totally could have had a fun conversation about that since I’ve spent the past month doing nothing but studying InSpec and [Chef Compliance](https://www.chef.io/compliance/). It was annoying, and it was also a wake-up call to me that I have to bring it. I’m used to Michael, who is an amazingly patient and empowering teacher. He wants to see women and minorities succeed, and he goes out of his way to support them. I realize, however, that I’m not entering into a career full of people like Michael. I will need to bring my A-game if I expect people to take me seriously. This totally fuels my passion to learn more, though, so I’m taking the bad with the good. On the same topic, I was on an interview a while back for an account management job, and the guy interviewing me was probably in his early 30s and single. And he was saying how he has no idea how he’d be able to do his job if he had kids.
I knew right then that I wasn’t getting the job (nor did I want it). I realized, however, that he must hold to some myth that women become _less_ productive after they have children. This, of course, assumes that every woman is the same. Following that logic, you’d have to assume that every man is the same, too. That sounds like a fun belief system to hold to, doesn’t it? Do you know how much time I pissed away before I had kids? It was a lot. Like a _lot_. Do you know how much I piss away now? Not much, y’all. Every minute is a gift, and I try not to waste _any_ of them. So I’ll do my part to bring down that stereotype, and you can [do your part](http://apresgroup.com/for-employers/) to not believe it. Cool? Cool. --- # InSpec Tutorial: Day 5 - Creating a Profile URL: https://hedge-ops.com/posts/inspec-basics-5/ Master the art of creating a profile in InSpec with our comprehensive tutorial. Learn from the basics to advanced techniques, and take your InSpec skills to the next level. Start now! So in the last four posts we learned how to write InSpec controls. They were meant to get you started, and then you could continue as far into the workshop as you wished. - Day 1: [Hello World](/posts/inspec-basics-1) - Day 2: [Command Resource](/posts/inspec-basics-2) - Day 3: [File Resource](/posts/inspec-basics-3) - Day 4: [Custom Matchers](/posts/inspec-basics-4) Full disclosure, I haven’t finished the workshop, but I’m chipping away at it. I’ve gotten enough done, though, that I wanted to see if I could create a profile out of it, just because I was eager to give it a go. Let’s say your company needs a whole profile of controls that are not offered by [Chef Compliance](https://www.chef.io/compliance/), and you need to run them on various machines. First you would make all of those controls (or pay me to do it for you). But now how are we going to let other people use them? You’re dying to know, I know. Y’all… Today’s an exciting day.
The very hard-working [Christoph Hartmann](https://twitter.com/chri_hartmann) was kind enough to meet with me and teach me how to build a profile out of all of my [InSpec](https://www.chef.io/inspec/) controls! I gotta say…it’s so easy. Like so easy that he described it and I understood it without asking questions, and I wasn’t bluffing, either. ## Ingredients We don’t need much to get this one done! - your text editor - your command line ## How to do it 1. [Connect to GitHub](/posts/inspec-basics-5#connect-to-github) 2. [Run the profile command](/posts/inspec-basics-5#run-the-profile-command) 3. [Clean up our folders](/posts/inspec-basics-5#clean-up-our-folders) 4. [Edit .yml file](/posts/inspec-basics-5#edit-yml-file) 5. [Check your profile](/posts/inspec-basics-5#check-your-profile) 6. [Push it to git](/posts/inspec-basics-5#push-it-to-git) 7. [Run it](/posts/inspec-basics-5#run-it) ### 1. Connect to GitHub Do you remember at the end of my posts when I said that you should really share this on [GitHub](http://www.github.com)? Well, I really hope you did, because you’ll need to have a git repository connected to GitHub for the magic to happen. Here’s [mine](https://github.com/anniehedgpeth/inspec-workshop.git) if you want to fork it. Once you have the repository cloned to your machine, you’ll need to navigate to the parent directory of your workshop. ### 2. Run the profile command When you’re in the folder that encloses your workshop, run this, and it will create those files I told you about. ```shell inspec init profile inspec-workshop --overwrite ``` ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/01-init-profile.png) ### 3. Clean up our folders Go back to your text editor, and take a look at what you just did. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/02-controls.png) Your old folder is in there; mine’s called `test`. And there’s a `.yml`, `libraries` folder, and a `controls` folder. 
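Since a screenshot can be hard to read, here’s the scaffold from `inspec init profile` in text form. This is a sketch; the exact files generated can vary by InSpec version:

```text
inspec-workshop/
├── inspec.yml      # profile metadata: name, version, maintainer, etc.
├── controls/
│   └── example.rb  # sample control; replace with your own tests
└── libraries/      # optional custom resources (can stay empty)
```

Anything ending in `.rb` inside `controls` gets picked up when the profile runs, which is why we’ll move our tests in there next.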
Do you notice how there’s an `example.rb` file in the `controls` folder? That tells us a few things: - We need to move our tests into the `controls` folder, so let’s do that now. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/03-move-files.png) - We don’t need the \_spec on our file names anymore. Christoph told me today that we needed it for previous versions, but they’ve since done away with that requirement. So go ahead and edit those, if you wish; I did. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/04-rename.png) - I also deleted my test folder and example.rb to clean it up. ### 4. Edit .yml file Now let’s head over to your newly created inspec.yml and add all of your information to it. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/05-yml.png) ### 5. Check your profile Let’s go run a check to see if it’s really a valid profile now and if it has any errors or warnings. ```shell inspec check inspec-workshop ``` The first time around I got a warning because I had a typo. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/07-warning.png) So I corrected it, and I was good to go! ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/08-no-warning.png) ### 6. Push it to git Let’s now push it to GitHub. ### 7. Run it We don’t even know if it really works yet, right? Well, go to your browser, navigate to your repo, and copy the URL to your clipboard. Now we’re going to run `inspec exec` straight from our git repo instead of running it locally off of the file! ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-25-inspec-basics-5/06-git.png) ```shell inspec exec https://github.com/YOURNAME/inspec-workshop -t ssh://USERNAME@IPADDRESS --password 'PASSWORD' --sudo-password=PASSWORD --sudo ``` How cool is that? You’re done! It is seriously that simple.
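For step 4 above, in case you’d like a text version of the metadata file, a filled-in inspec.yml can look something like this. Every value below is illustrative; substitute your own details:

```yaml
name: inspec-workshop
title: InSpec Workshop Profile
maintainer: Your Name
copyright: Your Name
copyright_email: you@example.com
license: All rights reserved
summary: CIS CentOS 6 controls from the InSpec workshop
version: 0.1.0
```

Filling in the fields now saves you a round of `inspec check` warnings later.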
## Concluding Thoughts After all of that, you’ll surely want to upload it to [Chef Compliance](https://www.chef.io/compliance/) for simplicity’s sake, and guess what? Next time we’re doing it! I would have been able to tell you how to do it today, because it’s another really simple process, but when Christoph was explaining it to me my kid kept coming in the room and asking me to spell things, so I was a bit distracted. I hear also that building your profile and putting it on GitHub is a way that you can use it as a test kitchen verifier, but I don’t even know what that means yet, so when I learn you’ll surely know about it. Go to Day 6: [Ways to Run It and Places to Store It](/posts/inspec-basics-6) --- # InSpec Tutorial: Day 4 - Custom Matchers URL: https://hedge-ops.com/posts/inspec-basics-4/ Master the basics of InSpec with our Day 4 tutorial. Learn how to use custom matchers, understand file resources, and construct a regex in Rubular. Perfect for beginners! Before you start today’s [InSpec](https://github.com/chef/inspec) basics tutorial, be sure to get up to date with the first three days! - Day 1: [Hello World](/posts/inspec-basics-1) - Day 2: [Command Resource](/posts/inspec-basics-2) - Day 3: [File Resource](/posts/inspec-basics-3) I was telling you about how at first I was really a little concerned with how I’d know if I was picking the correct file resource for a control. Being the newb that I am, I was overwhelmed with the choices on the [InSpec Resource](https://docs.chef.io/inspec_reference.html) page. Honestly, much of it was Greek to me. The first two controls were easy enough because to me, someone who knows little about all this, they were really intuitive. For the [first one](/posts/inspec-basics-2), I ran a command and asked it to match the output. For the [second one](/posts/inspec-basics-3) I searched inside a file for content. Easy enough. 
And when I got to the third one - 1.5.1, I really started learning how to search for the proper resource and matcher even if I didn’t really know what it all totally meant. Let’s get started, and I’ll show you what I mean. ## Ingredients Don’t forget our bazillion windows. Open these up, and make sure your CentOS vm is up and running. - [Nathen Harvey’s workshop](https://github.com/anniehedgpeth/workshops/tree/7d2bd8d98514a833a4293c0b3ff4b01bc6297b20/InSpec) - [InSpec Reference page](https://docs.chef.io/inspec_reference.html) - [Rubular](http://rubular.com/) - [Download the PDF of the CIS CentOS Linux Benchmark](https://benchmarks.cisecurity.org/tools2/linux/CIS_CentOS_Linux_6_Benchmark_v1.1.0.pdf) - your text editor - your command line ## Workflow At first, I thought I’d have to make a flowchart because the workflow would change depending upon the resource that was needed, but I’ve found that that’s not really the case. This workflow has proven to be expedient and efficient for me. 1. [Go to Harvey’s workshop and look up our control](/posts/inspec-basics-4#go-to-harveys-workshop-and-look-up-our-control) 2. [Find and read the control in the CIS PDF](/posts/inspec-basics-4#find-and-read-the-control-in-the-cis-pdf) 3. [Run the audit command on our command line](/posts/inspec-basics-4#run-the-audit-command-on-our-command-line) 4. [If audit fails, run remediation](/posts/inspec-basics-4#if-audit-fails-run-remediation) 5. [Go to the Inspec Reference page to decide on a resource and matcher to use](/posts/inspec-basics-4#go-to-the-inspec-reference-page-to-decide-on-a-resource-and-matcher-to-use) 6. [Construct a regex in Rubular](/posts/inspec-basics-4#construct-a-regex-in-rubular) 7. [Finish the control in your text editor](/posts/inspec-basics-4#finish-the-control-in-your-text-editor) 8. [Test](/posts/inspec-basics-4#test) ### 1.
Go to Harvey’s workshop and look up our control So head over to [Nathen Harvey’s workshop](https://github.com/anniehedgpeth/workshops/tree/7d2bd8d98514a833a4293c0b3ff4b01bc6297b20/InSpec), and let’s do the third one this time. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-23-inspec-basics-4/01-Harvey.png) ### 2. Find and read the control in the CIS PDF Open the [CIS CentOS Linux 6 Benchmarks v1.1.0](https://benchmarks.cisecurity.org/tools2/linux/CIS_CentOS_Linux_6_Benchmark_v1.1.0.pdf) that you downloaded, then look for our command inside there: 1.5.1. And now let’s fill in [those first few lines](/posts/inspec-basics-2#find-and-read-the-control-in-the-cis-pdf) with the info we need from the CIS documentation. ```ruby control "cis-1-5-1" do impact 1.0 title "1.5.1 Set User/Group Owner on /etc/grub.conf (Scored)" desc "Set the owner and group of /etc/grub.conf to the root user." ``` ### 3. Run the audit command on our command line Remember that the CIS benchmark will tell us the command to run to see if we’re compliant. Let’s run that now: ```shell stat -L -c "%u %g" /etc/grub.conf | egrep "0 0" ``` So I had to read [this](http://superuser.com/questions/508881/what-is-the-difference-between-grep-pgrep-egrep-fgrep) to understand that command, but all you really need to know is that when you run it, it should come back with `0 0`. If it doesn’t, then run the remediation. ### 4. If audit fails, run remediation Should your audit fail, it looks like it’s pretty simple to fix. Just run the remediation command that the CIS gives you. ```shell chown root:root /etc/grub.conf ``` ### 5. Go to the Inspec Reference page to decide on a resource and matcher to use Okay, we’re finally to the fun part. When I first did this control I thought that it would require a [command resource](/posts/inspec-basics-2) because the remediation can be done by a command instead of editing a file. 
So I tried this: ```ruby control "cis-1-5-1" do impact 1.0 title "1.5.1 Set User/Group Owner on /etc/grub.conf (Scored)" desc "Set the owner and group of /etc/grub.conf to the root user." describe command('stat -L -c "%u %g" /etc/grub.conf') do its('stdout') { should match '0 0' } end end ``` And, of course, it worked. But it’s not exactly using the InSpec framework in the way it was created because it’s just using the command that CIS gives you, and it doesn’t take advantage of the simplicity of InSpec. InSpec’s strength is that it can be understood by anyone. That is the real beauty of its simplicity. I reread the CIS description that stated that it was simply looking at a file to make sure its owner and group were set to the root user. (Kind of crazy how things make a lot more sense when you actually read them a second time.) So now when I go to the [Inspec Reference](https://docs.chef.io/inspec_reference.html) page and look at the options in the right sidebar menu, I’m still drawn to the _file_ resources since we’re looking at a file. Make sense? But which one? Let’s take a look at what we have to choose from in the [_file_](https://docs.chef.io/inspec_reference.html#file) resources. [![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-23-inspec-basics-4/02-owner.png)](https://docs.chef.io/inspec_reference.html#owner) [![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-23-inspec-basics-4/03-group.png)](https://docs.chef.io/inspec_reference.html#group) Well, what do you know? Exactly what we needed. So simple! ### 6. Construct a regex in Rubular We won’t need a regex for this one, so we can continue to #7. ### 7. Finish the control in your text editor So we’re scrapping our control where we wrote out the command resource, right? Right. 
So now let’s fill in the proper file resource with the [owner](https://docs.chef.io/inspec_reference.html#owner) and [group](https://docs.chef.io/inspec_reference.html#group) matchers and call it good (remember, we’ll be using two today): ```ruby control "cis-1-5-1" do impact 1.0 title "1.5.1 Set User/Group Owner on /etc/grub.conf (Scored)" desc "Set the owner and group of /etc/grub.conf to the root user." describe file('/etc/grub.conf') do its('owner') { should eq 'root' } its('group') { should eq 'root'} end end ``` ### 8. Test On your command line navigate to your workshop folder. Now run: ```shell inspec exec test/1_spec.rb -t ssh://username@ipaddress --password 'PASSWORD' --sudo-password=PASSWORD --sudo ``` Hopefully your test passed. If not…back to the drawing board for you. ## Concluding Thoughts InSpec keeps getting easier and easier for me the more I practice. I’ve really enjoyed getting to know it better. On a broader level, it’s teaching me that one doesn’t need to know the whole of the technological world to get started in technology. One just needs willingness, an open mind, and a determination to push past the frustration of the unknown. Little by little, you add more things to the _known_ pile, and you don’t feel so lost. I watched this video by [Kathy Sierra](https://www.youtube.com/watch?v=FKTxC9pl-WM) about how much one needs to know, how to retain it, and how to move forward. It was really so encouraging to me, and I want to give her a huge shoutout because it really spoke to me. As always, if you’d like to look at my [GitHub repository](https://github.com/anniehedgpeth/inspec-workshop.git), feel free! I’m adding a few controls little by little. I’d love your feedback, so hit me up on [Twitter](https://twitter.com/anniehedgie)! Go to Day 5: [Creating a Profile](/posts/inspec-basics-5) --- # InSpec Tutorial: Day 3 - File Resource URL: https://hedge-ops.com/posts/inspec-basics-3/ Learn how to use the file resource in InSpec in this tutorial. 
Understand how to write a control that looks for specific text within a file. Perfect for beginners in InSpec. Welcome back! If you’re just now joining me, then you’ll want to take a look at the first two days in this little Inspec journey. - Day 1: [Hello World](/posts/inspec-basics-1) - Day 2: [Command Resource](/posts/inspec-basics-2) In [Day 2](/posts/inspec-basics-2) I told you that the two resources that you’ll use most with Inspec are [_command_](https://docs.chef.io/inspec_reference.html#command) and [file](https://docs.chef.io/inspec_reference.html#file). The _command resource_ basically reads the output of the command that you give it, and you pass or fail based on that output. And the _file resource_ basically passes or fails based on what the control says the different aspects of that file should or shouldn’t be. So far I’ve found the simplest _file resource_ to be the [_content matcher_](https://docs.chef.io/inspec_reference.html#content). Today we’re going to do just that. You’re going to write a control that looks for specific text within a file. Easy but mighty. So do you remember our workflow and windows we need open? ## Ingredients Open these up, and make sure your CentOS vm is up and running. - [Nathen Harvey’s workshop](https://github.com/anniehedgpeth/workshops/tree/7d2bd8d98514a833a4293c0b3ff4b01bc6297b20/InSpec) - [InSpec Reference page](https://docs.chef.io/inspec_reference.html) - [Rubular](http://rubular.com/) - [Download the PDF of the CIS CentOS Linux Benchmark](https://benchmarks.cisecurity.org/tools2/linux/CIS_CentOS_Linux_6_Benchmark_v1.1.0.pdf) - your text editor - your command line ## Workflow Even though we are dealing with a file resource this time, the workflow will still be the same. I found that when I follow this workflow exactly, it goes way faster and I make fewer mistakes. 1. [Go to Harvey’s workshop and look up our control](/posts/inspec-basics-3#go-to-harveys-workshop-and-look-up-our-control) 2. 
[Find and read the control in the CIS PDF](/posts/inspec-basics-3#find-and-read-the-control-in-the-cis-pdf) 3. [Run the audit command on our command line](/posts/inspec-basics-3#run-the-audit-command-on-our-command-line) 4. [If audit fails, run remediation](/posts/inspec-basics-3#if-audit-fails-run-remediation) 5. [Go to the Inspec Reference page to decide on a resource and matcher to use](/posts/inspec-basics-3#go-to-the-inspec-reference-page-to-decide-on-a-resource-and-matcher-to-use) 6. [Construct a regex in Rubular](/posts/inspec-basics-3#construct-a-regex-in-rubular) 7. [Finish the control in your text editor](/posts/inspec-basics-3#finish-the-control-in-your-text-editor) 8. [Test](/posts/inspec-basics-3#test) ### 1. Go to Harvey’s workshop and look up our control So head over to [Nathen Harvey’s workshop](https://github.com/anniehedgpeth/workshops/tree/7d2bd8d98514a833a4293c0b3ff4b01bc6297b20/InSpec), and let’s do the second one since we did the first one last time. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-20-inspec-basics-3/01-nathen-harvey.png) ### 2. Find and read the control in the CIS PDF Open the [CIS CentOS Linux 6 Benchmarks v1.1.0](https://benchmarks.cisecurity.org/tools2/linux/CIS_CentOS_Linux_6_Benchmark_v1.1.0.pdf) that you downloaded, then look for our command inside there: 1.2.2. [Remember](/posts/inspec-basics-2#find-and-read-the-control-in-the-cis-pdf) how we need those bits of info to fill in our control? ```ruby control "cis-1-2-2" do impact 1.0 title "1.2.2 Verify that gpgcheck is Globally Activated (Scored)" desc "The gpgcheck option, found in the main section of the /etc/yum.conf file determines if an RPM package's signature is always checked prior to its installation." ``` You’re a pro already. Moving right along… ### 3. Run the audit command on our command line So the CIS benchmark will tell us the command to run to see if we’re compliant. 
Let’s run that now: ```shell grep gpgcheck /etc/yum.conf ``` The output should be `gpgcheck=1`. That tells us that `gpgcheck` should equal `1`, right? So what if it doesn’t? ### 4. If audit fails, run remediation We’ll need to edit the file if the audit failed, so let’s do that over ssh from the command line. Once you’re in, run `sudo nano /etc/yum.conf`. Then add the text, write out with `Ctrl+O`, and exit with `Ctrl+X`. Then run the command again to make sure you fixed the problem. ### 5. Go to the Inspec Reference page to decide on a resource and matcher to use I told you already that `command` and `file` are the most common resources, and that we’re going to be doing a file resource today. But how do we know that? Well, simple. The CIS audit wants us to look in a `file`. Let’s head over to the [Inspec Reference](https://docs.chef.io/inspec_reference.html) page and look at the options in the right sidebar menu. [![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-20-inspec-basics-3/02-inspec-resource.png)](https://docs.chef.io/inspec_reference.html#id43) We want to make sure that content exists within a file, right? So we’re going to see if we get a `match` for the `content` when we look inside that `file`. [![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-20-inspec-basics-3/03-content.png)](https://docs.chef.io/inspec_reference.html#id43) ### 6. Construct a regex in Rubular The audit already gave us a regex, so we don’t need to create one. On to #7. _Update_: I’ve since changed my tune on this a bit. Please see this post on [Regular Expressions](/posts/inspec-basics-8) so that you can take care to use the best possible regex when necessary. ### 7.
Finish the control in your text editor Okay, so you already had the first four lines, and now let’s fill in the rest with the file resource: ```ruby control "cis-1-2-2" do impact 1.0 title "1.2.2 Verify that gpgcheck is Globally Activated (Scored)" desc "The gpgcheck option, found in the main section of the /etc/yum.conf file determines if an RPM package's signature is always checked prior to its installation." describe file('/etc/yum.conf') do its('content') { should match /gpgcheck=1/ } end end ``` ### 8. Test On your command line navigate to your workshop folder. Now run: ```shell inspec exec test/1_spec.rb -t ssh://username@ipaddress --password 'password' ``` Again, for the command line newbs like me, `test` is the folder you’ve put your file in. `username`, `ipaddress`, and `password` are for your CentOS vm. (I hope you got that, because I’ll assume you know it next time!) Hopefully your test passed. If not…back to the drawing board for you. Now, if you plan to do more, then you’ll hit a snag when you get to 1.5.3, so let me help you out now to save you some frustration. When you’re running your test, use this instead: ```shell inspec exec test/1_spec.rb -t ssh://username@ipaddress --password 'PASSWORD' --sudo-password=PASSWORD --sudo ``` I ran into controls that I had to have `sudo` access to run. So after that I decided just to run `--sudo-password=PASSWORD --sudo` every time. ## Concluding Thoughts I’ve written a lot more of these controls since last time, and each time it gets easier and easier. The first few took me a little while to navigate, and then I got stumped by the sudo issue, but after I got in a groove, each one took me just a minute or two. As always, if you’d like to look at my [GitHub repository](https://github.com/anniehedgpeth/inspec-workshop.git), feel free! I’m adding a few controls little by little. I’d love your feedback, so hit me up on [Twitter](https://twitter.com/anniehedgie)! 
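One more aside before moving on: if you ever want to sanity-check a regex like the `/gpgcheck=1/` above without opening Rubular, plain Ruby can do the same job from `irb`. The sample file content here is made up for illustration:

```ruby
# A made-up stand-in for what /etc/yum.conf might contain.
sample = <<~CONF
  [main]
  gpgcheck=1
  installonly_limit=5
CONF

# The same regex our content matcher used; match? returns true or false.
puts sample.match?(/gpgcheck=1/)
```

If it prints `true`, the same regex will also satisfy the `should match` in your control.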
Go to Day 4: [Custom Matchers](/posts/inspec-basics-4) --- # InSpec Tutorial: Day 2 - Command Resource URL: https://hedge-ops.com/posts/inspec-basics-2/ Deepen your understanding of InSpec with our Day 2 tutorial. Learn how to create a command resource, use Nathen Harvey’s InSpec workshop, set up a CentOS 6 VM, and more. Perfect for beginners. Last week we walked through a really basic [Hello World](/posts/inspec-basics-1) InSpec tutorial, just to get our feet wet, and today in our [InSpec](https://www.chef.io/inspec/) workshop, we’ll be diving a little deeper and creating this: ```ruby control "cis-1-2-1" do impact 1.0 title "1.2.1 Verify CentOS GPG Key is Installed (Scored)" desc "CentOS cryptographically signs updates with a GPG key to verify that they are valid." describe command('rpm -q --queryformat "%{SUMMARY}\n" gpg-pubkey') do its('stdout') { should match /CentOS 6 Official Signing Key/ } end end ``` But first make sure that you go through [last week’s tutorial](/posts/inspec-basics-1) so that we can make sure you have all the proper software installed and updated. ## Nathen Harvey’s InSpec Workshop [Nathen Harvey](http://nathenharvey.com/) has a [fantastic InSpec workshop](https://github.com/anniehedgpeth/workshops/tree/7d2bd8d98514a833a4293c0b3ff4b01bc6297b20/InSpec) that I’m going through right now, and he talks about it on [Chef’s YouTube channel](https://youtu.be/dEPe-JXRjVU), too. Throughout my InSpec tutorial series, I’ll be showing you some basics for getting through his workshop successfully. Think of my tutorial as a remedial class before you take Harvey’s workshop, or some extra tutoring along the way. How about we just dive right in? ## But first…your VM Head over to [Azure](https://portal.azure.com) and get yourself a nice, shiny [CentOS 6 VM](http://www.openlogic.com/products-services/services/cloud-services/azure) and come back. 
It’ll need to be set up to enable non-interactive `sudo` access for the machine, so to do that, we have a bit of a [workaround](https://github.com/chef/train/issues/60) to do real quick. Go to your command line and ssh into your machine. Once you’re in, we need to edit `/etc/sudoers.d/username` (obvi use your username, right?). So you’ll need to enter ```shell sudo nano /etc/sudoers.d/username ``` Then just add this to the file, save, and exit. ```text username ALL=(root) NOPASSWD: ALL Defaults!ALL !requiretty ``` That’s it! Now we’re ready to roll. _Update_: This is just a problem with CentOS. ## Ingredients You’re going to need about a bazillion windows open for our little workflow to happen, so open up these: - [Nathen Harvey’s workshop](https://github.com/anniehedgpeth/workshops/tree/7d2bd8d98514a833a4293c0b3ff4b01bc6297b20/InSpec) - [InSpec Reference page](https://docs.chef.io/inspec_reference.html) - [Rubular](http://rubular.com/) - [Download the PDF of the CIS CentOS Linux Benchmark](https://benchmarks.cisecurity.org/tools2/linux/CIS_CentOS_Linux_6_Benchmark_v1.1.0.pdf) - your text editor - your command line ## Workflow This is what our basic workflow for the workshop is going to look like. 1. [Go to Harvey’s workshop and look up our control](/posts/inspec-basics-2#go-to-harveys-workshop-and-look-up-our-control) 2. [Find and read the control in the CIS pdf](/posts/inspec-basics-2#find-and-read-the-control-in-the-cis-pdf) 3. [Run the audit command on our command line](/posts/inspec-basics-2#run-the-audit-command-on-our-command-line) 4. [If audit fails, run remediation](/posts/inspec-basics-2#if-audit-fails-run-remediation) 5. [Go to the Inspec Reference page to decide on a resource and matcher to use](/posts/inspec-basics-2#go-to-the-inspec-reference-page-to-decide-on-a-resource-and-matcher-to-use) 6. [Construct a regex in Rubular](/posts/inspec-basics-2#construct-a-regex-in-rubular) 7. 
[Finish the control in your text editor](/posts/inspec-basics-2#finish-the-control-in-your-text-editor) 8. [Test](/posts/inspec-basics-2#test) ### 1. Go to Harvey’s workshop and look up our control So head over to [Nathen Harvey’s workshop](https://github.com/anniehedgpeth/workshops/tree/7d2bd8d98514a833a4293c0b3ff4b01bc6297b20/InSpec), and note the very first one on the list because that’s what we’re after. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-17-inspec-basics-2/04-nathen-harvey.png) ### 2. Find and read the control in the CIS PDF Open the [CIS CentOS Linux 6 Benchmarks v1.1.0](https://benchmarks.cisecurity.org/tools2/linux/CIS_CentOS_Linux_6_Benchmark_v1.1.0.pdf) that you downloaded, then look for our command inside there. Once you’ve found it, we’re going to snag some of that information for our control. Look at the first three lines again. ```ruby control "cis-1-2-1" do impact 1.0 title "1.2.1 Verify CentOS GPG Key is Installed (Scored)" desc "CentOS cryptographically signs updates with a GPG key to verify that they are valid." ``` Notice that I chose the `control` to be the CIS number. I could have been more specific, obviously, but I didn’t for simplicity’s sake. _Profile Applicability_ determines the `impact` field. And the `title` and `desc` come straight out of that word for word. _Edited to add_: Sometimes I use the _Rationale_ section to enter into the `desc` instead when it describes it in a better way. Let’s open up our text editor and create a new file. I called mine `1_spec.rb`. Then enter all of that in—the control, impact, title, and desc. ### 3. Run the audit command on our command line Let’s now run the audit command there that the CIS gives us: ```shell rpm -q --queryformat "%{SUMMARY}\n" gpg-pubkey ``` They don’t tell you what the output is supposed to be, but we can guess that our audit passed because it didn’t say it failed. 
So now we know that apparently that’s the output that it gives when the test is run and it passes. Score! So let’s copy and paste that text somewhere to use in a sec. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-17-inspec-basics-2/01-audit-command.png) ### 4. If audit fails, run remediation We don’t have to do that this time since we passed, so we’ll hold off on this step until another time when we need it. ### 5. Go to the Inspec Reference page to decide on a resource and matcher to use So there are a whole bunch of different audit resources to use for creating tests (and this [InSpec reference page](https://docs.chef.io/inspec_reference.html) has all of them). An _audit resource_ is basically the suggested tool to use in order to code your control. The two heavy hitting resources are going to be _file_ and _command_. We used the _file resource_ with the _content matcher_ [last week](/posts/inspec-basics-1) when we were searching for text within a file. So let’s take a look at the reference page and decide which to use now. In the menu on the right, you’ll see every possible resource. So what do we know about our test? When we run the audit command it gives us a standard output, right? So let’s find _command_ and - what do you know - it has a _stdout_ option under _Matchers_. A _matcher_ is just like it sounds - we want to match what’s in our test with what the stdout is. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-17-inspec-basics-2/03-inspec-resources.png) (Full disclosure, I’m oversimplifying the process just a little bit, but I’m only doing that for the sake of this being the first one in the workshop. I’ll explain as we dive into it more and more how to choose which audit resources to use. (I see a flowchart in our near future, perhaps.)) Alright, so when we click on _stdout_ from the menu on the right, it shows us this test to use when we need to match a standard output. 
[![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-17-inspec-basics-2/05-stdout.png)](https://docs.chef.io/inspec_reference.html#id31) Let’s copy that and enter it into our text editor after the four lines we added earlier. Now let’s change the `describe command` to have the audit test from the CIS benchmarks. In the next step we’ll change the `should match` matcher, so hold off for now. ```ruby describe command('rpm -q --queryformat "%{SUMMARY}\n" gpg-pubkey') do its('stdout') { should match (/[0-9]/) } end ``` ### 6. Construct a regex in Rubular Remember that standard output that we copied and pasted earlier in step 3? Grab that and head over to your browser that has [Rubular](http://rubular.com/) open. Now paste that standard output into _Your test string_. Now we’re going to pick a shortened regular expression out of that mess, enter it into _Your regular expression_, and if your _Match result_ has it highlighted, then it’s happy, and you’re safe to use just that shortened _regex_ in your command. So copy that regex and get ready to paste it. [![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-17-inspec-basics-2/02-rubular.png)](http://rubular.com/r/7969HPaj2n) ### 7. Finish the control in your text editor Now we’re ready to add our matcher to complete our control! So paste your regex into the `should match`, add another `end` at the bottom, and it should all look like this: ```ruby control "cis-1-2-1" do impact 1.0 title "1.2.1 Verify CentOS GPG Key is Installed (Scored)" desc "CentOS cryptographically signs updates with a GPG key to verify that they are valid." describe command('rpm -q --queryformat "%{SUMMARY}\n" gpg-pubkey') do its('stdout') { should match /CentOS 6 Official Signing Key/ } end end ``` ### 8. Test On your command line, navigate to your workshop folder.
Now run: ```shell inspec exec test/1_spec.rb -t ssh://username@ipaddress --password 'password' ``` For the command line newbs like me, `test` is the folder you’ve put your file in. `username`, `ipaddress`, and `password` are for your CentOS vm. Hopefully your test passed. If not…back to the drawing board for you. ## Concluding Thoughts This is still a very nebulous process for me. I’m not quite sure how I’m ever going to know enough to be able to choose the right audit resources, and that gives me a little anxiety. I’m hoping that it dissipates the more I progress through Harvey’s workshop, though. It would be great if you tracked and shared your work in a [git repository](https://github.com)! Here’s [mine](https://github.com/anniehedgpeth/inspec-workshop.git). Anyone have any tricks of the trade for me? Go to Day 3: [File Resource](/posts/inspec-basics-3) --- # InSpec Tutorial: Day 1 - Hello World URL: https://hedge-ops.com/posts/inspec-basics-1/ Start your journey with Chef Compliance and InSpec framework through this beginner-friendly tutorial. Learn how to set up and run a simple ‘Hello World’ test, and gain a deeper understanding of Compliance. No prior coding experience required. I’ve been sharing what I’ve learned about [Chef Compliance](/posts/setting-up-compliance), and because it uses the [InSpec framework](https://www.chef.io/compliance/), I want to start a little series on [InSpec](https://www.chef.io/inspec/) to gain a fuller understanding, appreciation for, and greater flexibility with [Compliance](https://www.chef.io/compliance/). It’s possible that you’re part of a company, perhaps without a dedicated security team, that uses Chef Compliance from within [Chef Automate](https://www.chef.io/automate/). And it’s possible that you’re totally content to run scans off of the premade [CIS profiles](https://benchmarks.cisecurity.org/) and call it a day. That’s a huge selling point of Compliance. It couldn’t be easier! 
In reality, however, the built-in Compliance profiles will get you to 80% of what you need, and then you’ll want to add or modify a bunch of other specific tests to meet the other 20% of your needs. By the end of this series, I’ll know how to do that (because I’m learning as I go), and you will, too!

Today, we’re going to run through a really simple setup and run a _Hello World_ test, just to get our feet wet. And don’t forget, InSpec was written with non-developer-types in mind! That’s actually the thing that attracted me to InSpec and Compliance when I was [getting started in my tech journey](/posts/introduction) - it’s totally approachable, and the authors _want_ it to be approachable. By the end of this series, I suppose, I will have tested their intentions one way or another. And for full disclosure, Chef is not paying me for these posts, so you’re getting a truly unbiased opinion. [My husband’s](http://hedge-ops.com) [company](http://www.ncr.com) is a Chef customer, which is what gave me the idea to delve into Compliance as a starting point.

## Installation

Okay, enough about me, let’s open up some terminals and get started. If you already have the updated versions of Homebrew, Ruby, and InSpec, then skip ahead!

_Update_: If you have the current ChefDK installed, skip down to the [Hello World Tutorial](#hello-world-tutorial). Also, there are a few other installation options listed [here](https://github.com/chef/inspec#installation).

### Install Homebrew

Before I could install InSpec, I needed to have the latest version of Ruby installed. And before I could install the latest version of Ruby, I had to install [Homebrew](http://brew.sh/), the OS X package manager.
```shell
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```

### Update Ruby

Here’s what I ran for the Ruby update:

```shell
brew install rbenv ruby-build

# Add rbenv to bash so that it loads every time you open a terminal
echo 'if which rbenv > /dev/null; then eval "$(rbenv init -)"; fi' >> ~/.bash_profile
source ~/.bash_profile

rbenv install 2.3.0
rbenv global 2.3.0
```

Close your terminal, reopen it, and run:

```shell
ruby -v
```

So do you now have Ruby 2.3.0? It’ll say after you run that last command.

### Install InSpec

Now we’re on to the good stuff. Let’s install InSpec:

_Update_: It’s preferable to use the InSpec that comes with the ChefDK, but if you’re not using the ChefDK otherwise, feel free to use the standalone version of InSpec - it is updated more often. Again, here are the other [installation options](https://github.com/chef/inspec#installation).

```shell
gem install inspec
```

Just to be sure everything went according to plan, run `inspec`, and you should see something that looks like a command menu. So now we’re all updated, and we’re ready to get started.

## Hello World Tutorial

First, we’re going to create a file with some text in it. Then we’re going to make a test that looks for _other_ text in the file, setting ourselves up for failure. Then we’ll add the correct text so that we can redeem ourselves. So here we go…

### Create a file to test

- Create a folder and open it in your text editor. (I’m using Visual Studio Code.)
- In that folder, create a file called `hello.txt`.
- In that file, type the text `Goodnight Moon`. (Don’t forget to save - gets me every time.)

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-13-inspec-basics-1/01-text-file.png)

### Create the test

- Create a file in that same folder called `hello_spec.rb`.
- In that file we’re going to create a _control_ with a [_file resource_](https://docs.chef.io/inspec_reference.html#file) having a [_content matcher_](https://docs.chef.io/inspec_reference.html#id42) of ‘Hello World’ in it. In other words, this file is going to check and see if the other file has any text in it that _matches_ ‘Hello World’.

```ruby
control "world-1.0" do                                  # A unique ID for this control
  impact 1.0                                            # Just how critical it is
  title "Hello World"                                   # Readable by a human
  desc "Text should include the words 'hello world'."   # Optional description
  describe file('hello.txt') do                         # The actual test
    its('content') { should match 'Hello World' }
  end
end
# You could just do the "describe file" block if you want. The
# rest is just metadata, but it's a good habit to get into.
```

### The failed test

- Now go to that folder in your terminal, and let’s run the command.

```shell
inspec exec hello_spec.rb
```

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-13-inspec-basics-1/02-failed.png)

- Yay! We failed!

### Make-up test

Okay, so you probably don’t like failure any more than I do, so let’s edit that text file so that we pass.

- Add the text `Hello World!` to the `hello.txt` file.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-13-inspec-basics-1/03-hello-world.png)

- Now let’s go back to our terminal, rerun `inspec exec hello_spec.rb`, and see what happens.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-13-inspec-basics-1/04-passed.png)

- We passed! Yay!

## Concluding Thoughts

Doing this little exercise helped me get my mind around what Chef Compliance does at a very basic level. I really like how clear and concise the framework is. In later posts, I’ll tell you where I’ve been tripped up and how I got around it. At its core, though, I think that it’s within my grasp; I just need to study up on some more basics.
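If it helps demystify what just happened, the whole Hello World exercise boils down to two operations: read a file, match text. Here’s a plain-Ruby sketch of that check - not InSpec’s actual implementation, and a temp file stands in for `hello.txt`:

```ruby
require 'tempfile'

# Sketch of what the file resource's content matcher amounts to:
# read the file and report whether the expected text appears in it.
def content_matches?(path, expected)
  File.read(path).include?(expected)
end

hello = Tempfile.new('hello')   # stand-in for hello.txt
hello.write('Goodnight Moon')
hello.flush

puts content_matches?(hello.path, 'Hello World')   # false - the failing run

hello.write(' Hello World!')    # the edit that redeems us
hello.flush

puts content_matches?(hello.path, 'Hello World')   # true - the passing run
```

InSpec wraps this same idea in its resource/matcher DSL and adds the pass/fail reporting you saw in the terminal.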
Go to Day 2: [Command Resource](/posts/inspec-basics-2) --- # Tour of Chef Compliance URL: https://hedge-ops.com/posts/tour-of-chef-compliance/ Explore the basics of Chef Compliance in this easy-to-follow guide. Learn how to add a node, scan your server, fix compliance failures, and more. Get started with Chef Compliance today! Last week I showed you [how to get set up](/posts/setting-up-compliance) to use [Chef Compliance](https://www.chef.io/compliance/), so now that you’re ready, let’s take a look at just what this tool can do for us. Today we’re going to take a very basic tour of Chef Compliance—easy-breezy—just to get the feel of it. So what we’re going to do is 1) use Chef Compliance to scan the Chef Compliance server that we just made, because why not? That needs to be clean, too, right? 2) We’ll take one of the failures that it gives us, 3) go in and fix it manually, and then 4) rescan to make sure it was remediated. ## Add a node After you log in, you’ll be at a screen that looks like this. Click on _Add Node_ ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/01-add-node.png) Then you’ll need to fill all of this out. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/02-add-server.png) - _Enter nodes (IPs or hostnames):_ Add yours; mine was `amh.southcentralus.cloudapp.azure.com` - _Add to environment:_ just pick any category of machine (i.e. test, production, development) - _Access:_ ssh - _Username:_ _enter yours_ - _Password:_ _enter yours_ - Then click _Add 1 node_ ## Scan it Now _check_ your newly added server, and click _Scan_. ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/03-scan.png) We need to tell it which profile we need to scan it against, so let’s choose: _cis/cis-ubuntu14.04lts-level1_. Then click _Scan Now_ and wait for the magic to happen. 
![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/04-cis.png)

After your scan is complete, your summary of compliance failures will appear.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/05-scan-report.png)

## Surprise! You have failures

52 of them, to be exact. The very first one says _Set Password Expiration Days_.

- Click on that (honestly, I don’t know if you _have_ to click it, but it can’t hurt).

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/06-errors.png)

- We need to learn about the rule that defines it as a failure, so click on _Compliance_ on the top left.
- Then find the profile that you scanned against and click on it: _ubuntu14.04lts-level1_

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/07-compliance.png)

Let’s take a look at the [InSpec](https://github.com/chef/inspec) code behind the rule that found this failure. It’s going to tell us which folder we need to look in to find the file that needs to be edited and what it needs to be edited to.

![I had to edit this image so that you could see the text that didn’t wrap.](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/08-error-details.png)

It’s a bit small, but it says:

```ruby
control "xccdf_org.cisecurity.benchmarks_rule_10.1.1_Set_Password_Expirations_Days" do
  title "Set Password Expiration Days"
  desc "The PASS_MAX_DAYS parameter in /etc/login.defs allows an administrator to force passwords to expire once they reach a defined age. It is recommended that the PASS_MAX_DAYS parameter be set to less than or equal to 90 days."
  impact 1.0
  describe file("/etc/login.defs") do
    its(:content) { should match /^\s*PASS_MAX_DAYS\S+90/ }
  end
end
```

So I don’t read or write InSpec, but what I find pretty cool is that we can figure out what it says pretty easily anyway.
Let’s go line by line and understand what this means.

```ruby
control "xccdf_org.cisecurity.benchmarks_rule_10.1.1_Set_Password_Expirations_Days" do
```

So that’s the rule that it says our server broke, right? Right.

```ruby
title "Set Password Expiration Days"
```

When we open up the file, we’re going to see a section with this as the title.

```ruby
desc "The PASS_MAX_DAYS parameter in /etc/login.defs allows an administrator to force passwords to expire once they reach a defined age. It is recommended that the PASS_MAX_DAYS parameter be set to less than or equal to 90 days."
```

This is a full description of the rule so that we understand exactly what it wants from us. So now we know that our `PASS_MAX_DAYS` must be set to 90 days or less. That must mean that it’s currently set to greater than 90 days.

```ruby
describe file("/etc/login.defs") do
```

This is telling us which file we need to change and that it’s in the `/etc` folder. Got it!

```ruby
its(:content) { should match /^\s*PASS_MAX_DAYS\S+90/ }
```

And there’s the code that’s making it all happen. So now we’re ready to go fix it manually!

## Let’s fix it

Our goal is to [automate these fixes](https://www.chef.io/), right? But for now, we’re learning and experimenting, so we’re going to have some fun by fixing one of these failures manually. So let’s get ready to clean up some messes—be still my OCD little heart.

First we need to open our terminal and ssh to our vm. So type `ssh`, then your username, then `@`, then your fully qualified domain name.

```shell
ssh username@fqdn
```

Now we need to change directory to the `/etc` folder, which holds the offending file.

```shell
cd /etc
```

Now that we’re in that folder, we need to open up the offending file using our text editor, Nano.

```shell
sudo nano login.defs
```

Let’s look for the text that we need to edit. We can use `ctrl+w` to search for `Pass`.
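(Quick aside before we edit anything: if you want to convince yourself of what a rule like that accepts, a couple of lines of plain Ruby will do it. This sketch uses a slight variant of the control’s pattern - `\s+` between the key and the value - and the sample lines are made up for illustration.)

```ruby
# Illustrative check of a PASS_MAX_DAYS rule against made-up sample lines.
pattern = /^\s*PASS_MAX_DAYS\s+90\b/

samples = [
  'PASS_MAX_DAYS   90',     # compliant: 90 days
  'PASS_MAX_DAYS   99999',  # the kind of default we are about to fix
  '# PASS_MAX_DAYS 90'      # commented out, so it should not count
]

samples.each do |line|
  puts format('%-25s %s', line, line =~ pattern ? 'compliant' : 'non-compliant')
end
```

Back to Nano.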
![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/09-edit-file.png)

And there it is! It’s currently set to 99999 days, and all we have to do is change it to 90 or less to make it compliant.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/09b-edit-file.png)

When you’re finished, hit `ctrl+o` (write out) to save, then `enter`. Then `ctrl+x` to exit.

## Let’s scan again

So now let’s go back to our Chef Compliance dashboard, check our server box, and scan it again.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/10-rescan.png)

Now when we look at our list of failures,

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/11-error-remediated.png)

…the one that we worked on that said _Set Password Expiration Days_ isn’t there anymore! Woohoo! We remediated it! Feels good, doesn’t it? Only 51 more to go…

## Concluding Thoughts

So, true confession, after I wrote the [tutorial for setting up Chef Compliance](/posts/setting-up-compliance), I was like, ‘Uh…seriously? Is it supposed to be this hard?’ But the creators of the software—totally sweet and super smart (and tall) guys—are aware and are working on that, so yay!

![(1) No amount of filters can fix a bad hair day, and (2) that’s a really cool Chef apron that I should have had a guy wear so that I was not stereotypically wearing an apron as the only woman in the pic—dangit](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-09-tour-of-chef-compliance/dinner_at__michael_and_annie_s_home__.png)

But then I started playing around with the actual program, and it is so incredibly easy to use and intuitive. I feel like it should be harder because it’s such a valuable tool in terms of how it changes the security game, making it so much safer to get to production.
So now that we understand how cool [Chef Compliance](https://www.chef.io/compliance/) is, I’ll be exploring and learning more about [InSpec](https://www.chef.io/inspec/) so that I can learn how to create my own profiles to test against. I hope you’ll stay tuned!

---

# DevOps Days Dallas

URL: https://hedge-ops.com/posts/devops-days-dallas/

Join me at DevOps Days Dallas this September! Discover how creativity and technology merge in the IT world. Network, learn, and explore sponsorship opportunities.

If you’ve read [The Phoenix Project](http://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592) (you know, the DevOps novel), then you’ll know what I mean when I say that I’m a bit of a Patty type—[art background but having something to contribute to the IT world](/posts/introduction), just trying to figure out how. So I think the best thing to do is network as much as I can and keep doing projects to find my niche. So far so good—I’m liking it more than I thought I would, honestly. The more small, manageable experiences that I have in the IT world, the less scary and ethereal it seems to me.

When I tell people in my world what I’m doing (re: learning tech stuff), they look at me like I’m crazy, [like I’m not being true to myself](https://youtu.be/BjhxLYD89X8), as if to say, “But Annie, you’re so creative. Why are you selling out?” But that’s just it, isn’t it? The world has the message that technology isn’t for creative-fuzzy-right-brained types. But it is! When I think about the flexibility and fluidity that is required of organizations employing DevOps practices, I’m convinced that it takes all types of people.

So one day I was reading this blog post by Doug Ireton about [encouraging women in DevOps](http://dougireton.com/blog/2013/06/23/encouraging-women-in-dev-slash-ops/) (so good!), and his last point was that the DevOps Days conferences need more women to help run them.
So being the doer that I am, I landed myself in the middle of planning for [DevOps Days Dallas](http://www.devopsdays.org/events/2016-dallas/) in September. I’m excited to get to meet so many people and learn about companies that are moving and shaking in the IT world. My jobs will be working with sponsors/vendors and throwing a very happy happy-hour, so shout out if you plan to go! And if your company wants to sponsor, all the better! Go here and [check out all the sponsorship levels](http://www.devopsdays.org/events/2016-dallas/sponsor/). Hope to see you there! --- # Tutorial for Setting Up Chef Compliance Server on Azure URL: https://hedge-ops.com/posts/setting-up-compliance/ Learn how to set up a Chef Compliance Server on Azure with our easy-to-follow tutorial. Ideal for beginners, we break down the process into simple steps. This tutorial for setting up [Chef Compliance](https://www.chef.io/compliance/) is for pretty much anyone to use. I break it into extremely simple steps, so that there is no mystery. The thing about setting up Chef Compliance that was challenging for me is that you can’t see the product until you build a home for it. It was a lot like taking a giant box home from Ikea when you don’t know what you bought, then you have to put it together with random instructions strewn together from blogs. As a non-technical type who’s been into technology for all of about five minutes, I am teaching myself to not be scared of technology. True, I’m most likely not the next Steve Jobs, but I did prove that I can now set up a virtual machine to use Chef Compliance in the cloud, and you can, too! Disclaimer: I’m not a prodigy; I just have a totally unfair advantage, and his name is Michael Hedgpeth of [hedge-ops.com](http://hedge-ops.com). I’m married to him, and thus have a totally awesome teacher with benefits. So there’s that. 
I will say, however, that I did not move from one step to the next without fully understanding what I was doing and the context in which I was doing it.

## What You Will Need

- a [Microsoft Azure](https://portal.azure.com) account (there are free trials if needed)
- knowledge of basic Ubuntu command line (I [took a course](https://www.lynda.com/Ubuntu-tutorials/Working-command-line/159637/179585-4.html) on basic Linux command line at [lynda.com](http://www.lynda.com))

## Overview of the steps

1. [Create an Ubuntu virtual machine on Azure](/posts/setting-up-compliance#create-an-ubuntu-virtual-machine-on-azure)
2. [Make your virtual machine accessible over the internet](/posts/setting-up-compliance#make-your-virtual-machine-accessible-over-the-internet)
3. [Rename your virtual machine](/posts/setting-up-compliance#rename-your-virtual-machine)
4. [Set up Chef Compliance on your virtual machine](/posts/setting-up-compliance#set-up-chef-compliance-on-your-virtual-machine)
5. [Configure Chef Compliance server](/posts/setting-up-compliance#configure-chef-compliance-server)

### Create an Ubuntu virtual machine on Azure

We decided to use Azure because VirtualBox just didn’t work for us for whatever reason, and Michael is more familiar with Azure than AWS right now. Plus, they offer a free trial, so it worked out. If you have had better luck with VirtualBox, I’d love to hear about it!

1. Go to your [Azure](https://portal.azure.com) account and click _NEW_.
2. Under _Marketplace_, click _Virtual Machines_.
3. Under _Featured Apps_, click _Ubuntu Server 14.04 LTS_.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/02-ubuntu-server.png)

4. Leave the default setting for _Select a Deployment Model_ as _Resource Manager_.
5. Under the _1 – BASICS – Configure Basic Settings_ tab, fill in the following:
   - _Username_—This is you. You’ll have to enter it several times, so make it simple.
   - _Password_—Choose a good one because it’s over the internet, but you will have to enter it, and I don’t know that you can copy and paste it.
   - _Resource Group_—Create a new one and name it.
   - _Location_—Choose the location of your server that’s closest to your region.
6. Under the _2 SIZE_ tab—A1 is what I chose; cheap, and it did the job.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/03-create-vm.png)

7. Under the _3 SETTINGS_ tab—choose all defaults for _Storage_ options.
8. Under the _4 Summary_ tab—click _OK_, and your VM will be deployed after a few minutes.

### Make Your Virtual Machine Accessible Over the Internet

We’re doing this so that our browser can access Chef Compliance on our server. First, we’ll register a public name for the server, so that we can type that name in a browser. Then we’ll need to change the security settings on the network security group.

1. So go to _All Resources_, click on your server, then click on your _IP address_ and note that there is no DNS name label for it.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/05-changing-dns.png)

2. Click on _Configuration_, add the name you choose in the box called _DNS name label_, and copy it to notepad or something because you’ll need it later. Then click _SAVE_ at the top of the _Configuration_ tab.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/06-configuration.png)

3. Go to the network security group (the one with the shield icon) that you just created. We need to create a rule so that our compliance website can be accessed.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/07-inbound-security-rules.png)

- In settings, click on _Inbound Security Rules_.
![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/08-inbound-security-rules.png)

- Click _ADD_, name it _allow-ssl_, and change the _Destination Port Range_ to _443_ so that you can talk to the server over https. Then click _OK_.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/09-add-rule.png)

- Make sure your machine is on by going back to _All Resources_ and clicking on your VM (with the monitor icon). If _Connect_ is greyed out, then you’re connected.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/10-make-sure-vm-is-on.png)

### Rename Your Virtual Machine

After all of that, your vm still doesn’t really know that its name was changed, so now we have to tell it what its name is.

1. SSH to your vm. Open up your terminal and run:

```shell
ssh username@dnsname
```

Mine was:

```shell
ssh annie@cheftutorialcompliance.southcentralus.cloudapp.azure.com
```

Respond `yes`, then enter your password.

2. Install Nano on your VM:

```shell
sudo apt-get install nano
```

3. Open this file so that you can edit it:

```shell
sudo nano /etc/waagent.conf
```

4. Find this in the document:

```text
Provisioning.MonitorHostName=n
```

5. The value will be `n` when you find it; change it to `y`.
6. Save with `Ctrl+o`, then accept the file name by pressing _Enter_.
7. Exit with `Ctrl+x`.
8. Once done, run this command:

```shell
sudo waagent -install
```

9. Now change the name to the full domain name that you’ll type in your browser. I used:

```shell
sudo hostname cheftutorialcompliance.southcentralus.cloudapp.azure.com
```

When you finish this step, you should be able to type the command `hostname` and something like `cheftutorialcompliance.southcentralus.cloudapp.azure.com` should come up. This is the terminal I used.
![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/11-ssh-to-vm.png)

### Set Up Chef Compliance on Your Virtual Machine

Finally. After all of that work, we’re ready to actually put Chef Compliance onto our virtual machine. I used this [guide](https://docs.chef.io/install_compliance.html).

1. To download the package, go to the [download site](http://downloads.chef.io/compliance/), get the download URL for Ubuntu, and copy and paste the link on a notepad or something to use in a minute.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/01-compliance-download.png)

2. cd to the /tmp directory:

```shell
cd /tmp
```

3. `wget` the download URL:

```shell
wget [download url that you just copied]
```

4. As the [directions say](https://docs.chef.io/install_compliance.html), run sudo dpkg:

```shell
sudo dpkg -i /tmp/chef-compliance-.deb
```

Hint: Just type up to chef, then hit tab to autofill. This will take a minute or so.

5. Run `sudo chef-compliance-ctl reconfigure`.
6. This takes you to a license agreement. (Edited to add: They may have done away with this requirement.)
   - Hit any key.
   - Read it as you scroll all the way down to the end.
   - Then hit `q` to get out of the agreement.
   - You then need to agree to it, so type `yes`, and it will load the compliance server.
   - This will take a few minutes (if you got a slow, cheap machine like I did).

### Configure Chef Compliance Server

Now that it’s all installed, it’s time to accept the license agreement and set up an administrator user so that you can start using the product.

1. Navigate to your URL and add `/#/setup` to the end; make sure it’s `https`.
2. Your browser doesn’t trust your server, so it’ll warn you not to go there. Just click on _Advanced_ and then accept the risk that it asks you to accept by clicking the link at the bottom.

![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/14-accept-risk.png)

3.
Click on _Setup Chef Compliance_ ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/12-chef-compliance-setup.png) 4. Accept the license agreement…again ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/15-pasted1.png) 5. Set up an admin user and click _Next_ ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/16-pasted2.png) 6. Make sure your info is correct and click _Configure_ ![](https://ik.imagekit.io/hedgeops/site/article_images/2016-05-05-setting-up-compliance/13-failed.png) The first time I went through, it said that the setup failed. But then I went back to the dashboard and logged in, and all was well. Who knows. 7. Go to the dashboard, and you’re ready to go! 8. Now go have a glass of wine and a chocolate chip cookie and pat yourself on the back. ## Concluding Thoughts I gotta admit, this whole process was a bit much for me. I couldn’t have done it without [Michael](http://hedge-ops.com). Once I got to the end, I was super surprised to see just how simple and intuitive the program was after such a complicated setup. I’m really excited to learn more about Chef Compliance, so in another post I’ll get to the fun part where we actually get to play around with it and see just what it can do. --- # I’m Annie, and I have an art degree. URL: https://hedge-ops.com/posts/introduction/ Join Annie as she embarks on a journey from art to technology, applying her creative skills in a new field. Follow her progress and learn from her experiences in this engaging blog series. So this is my introductory post, and I can’t shake that feeling like I just walked into the men’s restroom. It’s not because it’s [mostly men in technology](http://martinfowler.com/articles/born-for-it.html) (well, a little), but it’s because I don’t know how I got here and if I should exit quickly before making eye contact with anyone. 
I have a BFA in Film, a minor in Theatre, and I worked as a [Casting Director](http://www.imdb.com/name/nm1805484/?ref_=nv_sr_1) before leaving full-time employment for more flexible endeavors while I was raising babies. I did some blogging, and currently I have a home decorating business. [I’m ready to go back to full-time employment now](http://leanin.org/book/), but I also want a career change. The obvious career path for someone with my skill-set is probably HR or Marketing, and I get that. But here’s the catch—I don’t want that. I’d love to take the soft skills that I’ve gained through my creative pursuits and apply them to technology. I feel like I could contribute something original and learn to create at a level that I haven’t yet experienced. Over the course of the next few weeks and months I’ll be doing a series of small projects and sharing what I learn. I’m taking baby steps to get to where I want to be, but [I’m excited about the process](https://youtu.be/C13JC_YP2Q8). When I first started learning how to speak French the biggest hurdle was just getting out there and speaking it with people without being embarrassed. If I didn’t humble myself in that way, then I would have never improved. I made a ton of mistakes (and am still not totally fluent), but I have finally learned to not be embarrassed. This website for me is like speaking horribly broken French to native Parisians. It’s a little embarrassing, but I’m hoping that the upside will be lots of growth and understanding. --- # Promoting Cookbooks into a Private Chef Supermarket with TeamCity URL: https://hedge-ops.com/posts/promoting-cookbooks-into-a-private-chef-supermarket-with-teamcity/ Discover how to promote cookbooks into a private chef supermarket with TeamCity. Learn to control version dependencies, ensure cookbook availability, and streamline the approval process for external cookbooks. 
[We want the ability to control](/posts/my-advice-for-chef-in-large-corporations) which versions of which cookbooks we rely on and to know that those cookbooks are available to us even if the author removes them from GitHub. In fact, with [the recent craziness on dependency management](http://www.theverge.com/2016/3/24/11300840/how-an-irate-developer-briefly-broke-javascript) and after listening to [an episode on availability on Arrested DevOps](https://www.arresteddevops.com/availability/), I’m starting to think that this isn’t just for large organizations like mine.

So to protect ourselves from that kind of craziness, we have created a [private Chef Supermarket](https://www.chef.io/blog/2015/12/31/a-supermarket-of-your-own-running-a-private-supermarket/) that we host all dependencies on. Then in our policyfiles, we specify that private supermarket as our default source for finding cookbooks.

At first, to get us started, we manually uploaded the cookbooks we needed and got to work. Then as we scaled, we got tired of people asking us to upload another version. On top of that, we want a good, clean process for approving external cookbooks/code into our blessed environment. Here’s how we implemented it:

## 1: Synchronize GitHub with internal Git server

We have an internal, corporately blessed Git server we use, so we needed to get what was in GitHub into that Git server.
For each of the cookbooks, we create a TeamCity [build configuration](https://confluence.jetbrains.com/display/TCD9/Build+Configuration) (based on a [template](https://confluence.jetbrains.com/display/TCD9/Build+Configuration+Template)) that does just this with a simple [Command Line runner](https://confluence.jetbrains.com/display/TCD9/Command+Line) (which runs on Windows only at the moment):

```bash
mkdir %Repository Name%.git
git clone --mirror %Github Clone URL%
cd %Repository Name%.git
git remote add stash %Stash Clone URL%
git push --all stash
git push --tags stash
```

There are three variables that are [defined as parameters](https://confluence.jetbrains.com/display/TCD9/Configuring+Build+Parameters) here:

1. Repository Name: the name of the git repository, like `chef-client`
2. Github Clone URL: the URL to clone the repo on GitHub, like `https://github.com/chef-cookbooks/chef-client`
3. Stash Clone URL: the URL to push the code to internally

I had to go into our internal Git server and create a repo with the same name as the GitHub one so something could be pushed. I then [schedule this to run every day](https://confluence.jetbrains.com/display/TCD9/Configuring+Build+Triggers) and let it do its thing. If I got crazy, I could make it run every time there was a check-in on GitHub, but the model doesn’t _have_ to have immediacy to it. My internal repository will be reasonably up-to-date.

## 2: Create an internally approved branch based on a tag

The next thing we do is create a new branch on our internal git server that marks what we have code reviewed and approved to be a part of our infrastructure.
During the first setup, we clone the repo from the internal git server onto our local machine:

```shell
git clone https://mycompanygitserver.com/chef-client.git
```

Then we simply run these commands:

```shell
git checkout -b mycompany-approved v4.3.2
git push origin mycompany-approved
```

This creates our _safe_ branch, from which our promotion can occur.

## 3: Run cookbook build just as with other cookbooks

The cookbook build will run as I outlined in [a different post](/posts/chef-cookbook-builds-in-teamcity). The only difference is that the VCS Root I pull will be off of the `mycompany-approved` branch created above.

## 4: Promote cookbook to supermarket

Then I promote a cookbook to the supermarket using a TeamCity template that I use for all cookbook promotions, which is basically this command:

```shell
knife supermarket share %cookbook_name% "Other" -o .
```

I had to ensure that the `knife-supermarket` gem was installed on my build server (configured by Chef as well, of course). Also, I parameterized the cookbook name so this could live inside a template that can be reused everywhere. The cookbook also has a [snapshot dependency](https://confluence.jetbrains.com/display/TCD9/Snapshot+Dependencies) on the cookbook build above, ensuring that it is only released to our supermarket when it passes the build. That keeps everyone honest.

## 5: Merge into approved branch as people request

People will still request that we merge into the approved branch, which is locked down so that a smaller team can approve the changes. We can use a pull request model to review and audit how this happens.

## Conclusion

Doing it this way gave us the most control over which changes go into our infrastructure. It avoids the public supermarket altogether, because we found that the packages posted on that server cannot be pushed to another supermarket.
Even if that problem were fixed, this way is superior because it gives us the ability to code review and audit every dependency we have going into our system.

---

# Orchestration Maturity Model with Chef

URL: https://hedge-ops.com/posts/orchestration-maturity-model-with-chef/

Explore the orchestration maturity model with Chef, a configuration management tool. Understand the three phases of orchestration, from modeling existing processes to managing state declaratively, and finally, decoupling nodes for scalability.

One of our [earliest questions](/posts/proof-of-concept) about configuration management tools was how we would do orchestration with them. We realized early on that with Chef the orchestration story was fairly weak, especially compared with something like [salt](http://saltstack.com/). But Chef’s [other benefits](/posts/technology-partnership) outweighed the weaknesses, so we moved forward. The whole time, though, I was confused about why Chef hadn’t invested more in orchestration. Salt and Ansible have it as a first-class citizen, and Puppet was [actively adding it to its product](https://docs.puppet.com/pe/latest/app_orchestration_overview.html). I didn’t really _get_ it until I listened to Julian Dunn’s [excellent presentation](https://www.youtube.com/watch?v=kfF9IATUask) on it at Ghent. Chef, as a company, is more interested in giving you what will work for you than giving you what you’re asking for. This is what makes them such a special partner for us. They’re more of a coach and less of an enabler. This has led me to think of orchestration as a maturity journey through three phases:

## Phase 1: Do it Like Before

The first phase of orchestration will be to model how you have been doing things before. OK, I need to stop services, copy files, start services. That’s orchestration, right?
At a surface level this is fine, but it leaves out the edge cases that happen when you’re dealing with a scaled infrastructure:

- What happens when a node was down and didn’t get the message to stop, and then comes back up in the middle of your upgrade, and starts?
- What happens when a new node is added at a time when you’re not doing an upgrade? Are any of those orchestration commands critical to the node itself?
- Are you splitting configuration management between your configuration management tool _and_ your orchestration? If you are directly stopping a service, _then_ running Chef later, your configuration management is leaking out of your system and into other places.

## Phase 2: Declaratively Manage State

If we’re writing Chef recipes and starting from the beginning with some infrastructure, why live with the limitations of Phase 1? Why don’t we solve this problem? Thankfully, with a tool like [consul](https://www.consul.io/) we can solve it by making some subtle changes:

- Create a real-time shared data view of the state of your system (with consul or [zookeeper](https://zookeeper.apache.org/))
- Using this shared data view, define _all_ desired states of the system. So if you need to transition your web cluster through the states off, waiting, and converged, record that in your key-value store
- Write your Chef recipe so that the desired state (resources) is compiled _based on the desired state defined in the shared data view_. So you have an if statement that says _if we want this thing to be off right now, there is a service resource with an action of `off`_
- Write an orchestrator that manages the state transitions between nodes in the environment _by updating the shared data view_. With consul, we can do a `consul exec` on our nodes to force Chef to run. Or take it even further. And the orchestrator itself can be written through Chef.
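The "compile resources from the shared data view" idea can be sketched in plain Ruby. This is a hypothetical illustration, not from the original post: the key name `web_cluster_state`, the JSON shape, and the action mappings are all assumptions.

```ruby
require 'json'

# Hypothetical mapping from a desired cluster state (as stored in the shared
# data view, e.g. a consul key) to the service actions a recipe would declare.
STATE_ACTIONS = {
  'off'       => [:stop],
  'waiting'   => [:stop],
  'converged' => [:enable, :start]
}.freeze

# In a recipe this would drive a resource, e.g.:
#   service 'web' do
#     action actions_for(consul_kv_json)
#   end
def actions_for(shared_view_json)
  state = JSON.parse(shared_view_json).fetch('web_cluster_state', 'converged')
  STATE_ACTIONS.fetch(state)
end
```

The point is that the recipe always declares a complete desired state; the orchestrator only ever edits the shared data view, never the nodes directly.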
This gives you a number of benefits over the earlier phase:

- If a node isn’t there when the state changes, it checks in and converges to the correct state, immediately! You _always_ get the node at the right state in the process because the nodes share the latest up-to-date data view
- If a node is added, it will also converge to the correct state. It checks in and catches up immediately. Now you don’t have to worry about adding nodes and coordinating that with upgrades; things will just happen.
- All configuration management are belong to Chef. Simple.

## Phase 3: Decouple the Nodes

The unfortunate reality, though, is that even after phase two you may not be ready for bursting and scale. In order for those capabilities to exist, you need to have services that are independent of each other. So it shouldn’t matter that your web tier is on a particular version and the database hasn’t caught up yet. The web tier should tolerate that reality. So you can then update them separately and not worry about it. I still think there is a role for real-time orchestration in moving portions of your infrastructure through an upgrade a little at a time until everything is upgraded. But the complexities of having to turn one layer off so another layer can do its thing should largely go away. Unfortunately, this is really up to the software design itself to facilitate. Therefore, it’s really a business decision whether that infrastructure should be made burstable and thus truly cloud-enabled. In some cases, we’ll only get as far as phase two. In others, we’ll go all the way, but probably camp out at phase two while the software catches up. That’s the way it should be: let’s get there little by little. As long as we’re going in the right direction, we’re good.

---

# Chef Cookbook Builds in TeamCity

URL: https://hedge-ops.com/posts/chef-cookbook-builds-in-teamcity/

Explore how to standardize your Chef Cookbook builds in TeamCity.
Learn how to set up project structure, version control settings, build steps, and more. Ensure quality gates for your infrastructure creation.

As more and more teams are [coming on board with Chef](/posts/my-advice-for-chef-in-large-corporations), I’ve begun to standardize our pipeline and ensure that everyone meets quality gates for the infrastructure they are creating. This started with finally figuring out how to get [Test Kitchen working with Windows](/posts/test-kitchen-required-not-optional), then quickly migrated to getting it running in [TeamCity](/posts/christmas-with-teamcity). Our entire division uses TeamCity for configuration management, so it’s something that I needed to plan out carefully in order to make the Chef pipeline _feel_ like it’s a part of a team’s normal build process.

## Project Structure

With this in mind, we created a Chef [subproject](https://confluence.jetbrains.com/display/TCD9/Creating+and+Editing+Projects) _inside_ each team’s _existing_ project. We want them to have ownership when Chef infrastructure breaks and to take action on problems, just as if the problem happened in their own software build. We then created a Chef Cookbook [build template](https://confluence.jetbrains.com/display/TCD9/Build+Configuration+Template) at the `` level that all cookbooks can use for their own builds. This template defines a cookbook parameter that enables the build steps below to know where the cookbook is in source.

## Version Control Settings

We’re not really sure yet how we will approach testing when it comes to dependencies. If a cookbook is very young or if we are testing a lot of things at once, we might want to use relative path dependencies to other cookbooks. Or we might want to use data bags at some level. So we’ve decided to mimic a Chef repo on the build agent itself and then test it that way.
We do this [through a checkout rule](https://confluence.jetbrains.com/display/TCD9/Build+Checkout+Directory#BuildCheckoutDirectory-Customcheckoutdirectory), like this:

```text
+:.=>cookbooks/contributors
```

This means that the contributors cookbook will be checked out to the cookbooks/contributors directory relative to the build working directory.

## Build Steps

### 1. Run Foodcritic

We want to do Chef linting first before we get into further testing, so we run [foodcritic](http://www.foodcritic.io/). This is done simply by creating a [Command Line runner](https://confluence.jetbrains.com/display/TCD9/Command+Line) with the foodcritic command:

![Run Foodcritic](https://ik.imagekit.io/hedgeops/site/article_images/2016-04-15-chef-cookbook-builds-in-teamcity/run-foodcritic-1.png)

### 2. Run Rubocop

Once foodcritic runs, we want to finish our cookbook linting with [rubocop](http://batsov.com/rubocop/):

![Run Rubocop](https://ik.imagekit.io/hedgeops/site/article_images/2016-04-15-chef-cookbook-builds-in-teamcity/run-rubocop.png)

### 3. Run Cookbook Unit Tests

I’m not a huge fan of [ChefSpec](https://docs.chef.io/chefspec.html) because I believe it mocks too much out and ends up not adding a lot of value. But I do think having at least one test that ensures your code will converge is immensely helpful. It’s much better to wait the few seconds for proof that the code converges than the few minutes for kitchen to tell you the same thing. So I put the step here:

![Run Chef Unit Tests](https://ik.imagekit.io/hedgeops/site/article_images/2016-04-15-chef-cookbook-builds-in-teamcity/run-chef-unit-tests.png)

_Update: actually, just before this published, I removed this step. The ChefSpec unit tests required too much Ruby expertise to be helpful. Plus, people are working well with kitchen and learn to rely on it instead. So as of yesterday, this step was removed._

### 4. Run Test Kitchen

And now for the magic! I need to [run Test Kitchen](/posts/test-kitchen-required-not-optional).
If I’m using vagrant, I need to have a physical build agent to do this on. [If I’m running azure](/posts/tutorial-for-test-kitchen-with-azure), I need to have some credentials set up on the build agent. All of that configuration is handled through Chef itself, so at this point all I need to do is run the command itself:

![Run Kitchen Test](https://ik.imagekit.io/hedgeops/site/article_images/2016-04-15-chef-cookbook-builds-in-teamcity/run-kitchen-test.png)

Kitchen test will do a `create`, `converge`, and `verify`. It runs through the whole process. And I’ve tested that if it fails, the build will fail.

### 5. Kitchen Destroy

If the above test fails, it’s important not to keep the virtual machine running. This is especially true if I’m using the azure runner. So at the end I’ll call kitchen destroy, and _always_ call it, even if the previous command failed:

![Run Kitchen Destroy](https://ik.imagekit.io/hedgeops/site/article_images/2016-04-15-chef-cookbook-builds-in-teamcity/run-kitchen-destroy.png)

## Build Agent Setup

As I mentioned earlier, our build agents are set up through Chef itself, so configuration of them is easy. Since we are creating our Chef projects inside the product’s projects, we don’t want to mix their build agents with the Chef ones. We keep them separated because we let each team have their own build agents that they manage. To solve for the mix, we add the Chef subproject set up above to our own Chef build [agent pool](https://confluence.jetbrains.com/display/TCD9/Agent+Pools). Then in our template, we add a [build agent requirement](https://confluence.jetbrains.com/display/TCD9/Agent+Requirements):

![Chef Cookbook Requirement](https://ik.imagekit.io/hedgeops/site/article_images/2016-04-15-chef-cookbook-builds-in-teamcity/chef-cookbook-requirement.png)

In our recipe for the build agent, we set this environment variable, so this limits our cookbook builds to only run on build agents on which our Chef recipe has run.
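Back in the Kitchen Destroy step, the important property is that cleanup always runs, even when the test step fails. TeamCity provides this with a build step configured to execute even if previous steps failed; purely for illustration, the same guarantee expressed in plain Ruby looks like this (the wrapper itself is hypothetical, not part of the build):

```ruby
# Run the test step, then always run the cleanup step, even when testing raises.
# Passing the steps as callables keeps the sketch self-contained.
def run_with_cleanup(test:, destroy:)
  test.call
ensure
  destroy.call # e.g. `kitchen destroy` -- always reached, pass or fail
end

# Usage in a pipeline script might look like:
#   run_with_cleanup(
#     test:    -> { system('kitchen test')    or raise 'kitchen test failed' },
#     destroy: -> { system('kitchen destroy') }
#   )
```

Skipping the cleanup guarantee is what leaves orphaned cloud VMs quietly burning credit.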
## Triggering

Finally, we want to trigger this cookbook build whenever something in the cookbook is checked in. We do this by adding a [VCS trigger](https://confluence.jetbrains.com/display/TCD9/Configuring+VCS+Triggers) with the default settings to the template.

## Conclusion

With the template in place, it takes about ten minutes to add a team’s cookbook to be fully tested and built within their own environment. It feels very much like a software build, which is fantastic for everyone because it reminds us that the infrastructure code we are creating is like any other code; it should be subject to automation just like the rest.

---

# chef-vault - Tutorial from Beginner to Expert

URL: https://hedge-ops.com/posts/chef-vault-tutorial/

chef-vault is the built-in secrets management system for Chef. Become an expert with this post, and learn when to use this vs. Hashicorp Vault.

[`chef-vault`](https://github.com/chef/chef-vault) is the built-in secrets management system for [Chef](/posts/chef-community). This post is for people who may have struggled [with the documentation](http://docs.chef.io/chef_vault.html) and want a simple walkthrough. Once finished with this tutorial, you should be able to implement `chef-vault` in a compliant way for a security-conscious enterprise.

## Why `chef-vault`?

[Encrypted data bags](https://docs.chef.io/data_bags.html#encrypt-a-data-bag-item) force you to copy the shared secret that is used for decryption to your infrastructure. It’s very easy to take that secret file and nefariously decrypt the data from somewhere else without anyone knowing. Chef-vault makes this much more difficult by giving both nodes and Chef server users express permission to decrypt certain data. With `chef-vault` you don’t have to share a secret file with all of your nodes. This is a step up that simplifies everything. The solution isn’t without its drawbacks.
The main one is that if you add nodes, you have to rerun something on the server for the new node to be able to decrypt the data bag. With [Hashicorp’s vault](https://www.hashicorp.com/blog/vault.html) you get better control over that, better lease management, and credential creation. To me, encrypted data bags are like an unreliable used car, `chef-vault` is a nice mid-size sedan, and Hashicorp’s vault is like a luxury car. So now that we know where the tool sits within our choices, let’s look at the basics:

## Setup

To get started with `chef-vault`, have the latest [Chef Workstation](https://community.chef.io/downloads/tools/workstation) installed and install the [`chef-vault` gem](https://rubygems.org/gems/chef-vault/versions/2.8.0):

```bash
chef gem install chef-vault
```

And then ensure you have a `.chef` directory that connects to a Chef Server.

## Creation

Creating a vault is easy. This creates a vault called `passwords`:

```bash
knife vault create passwords root -S "policy_name:webserver" -A "michael" -J root.json -M client
```

For whatever reason, the `knife vault` command doesn’t talk to a Chef Server by default. So to create a vault, you have to specify `-M client` at the end, which connects to your configured Chef Server. Or you can make your life easier going forward by adding this line to your `knife.rb`:

```ruby
knife[:vault_mode] = 'client'
```

For the command, I used this `root.json`:

```json
{
  "username": "mhedgpeth",
  "password": "myPassword"
}
```

This uploads _two_ data bag items to a data bag called `passwords`:

1. The `root` data bag item has the data above, encrypted
2. The `root_keys` data bag item stores the metadata about which clients can read and edit the `root` data bag item (as you specified above in the search criteria `-S` and administrators list `-A`).
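Conceptually, this pair of items works like envelope encryption: a random shared key encrypts the item, and that shared key is wrapped separately for each client and administrator. Here is a simplified plain-Ruby sketch of the idea; the real chef-vault item format and cipher details differ, and all names here are illustrative:

```ruby
require 'openssl'

# Encrypt a secret once with a random shared key, then wrap that shared key
# with each client's RSA public key (roughly what the _keys item stores).
def encrypt_item(plaintext, client_public_keys)
  cipher = OpenSSL::Cipher.new('aes-256-cbc').encrypt
  shared_key = cipher.random_key
  iv = cipher.random_iv
  data = cipher.update(plaintext) + cipher.final
  wrapped = client_public_keys.transform_values { |pub| pub.public_encrypt(shared_key) }
  { data: data, iv: iv, keys: wrapped }
end

# A client unwraps the shared key with its private key, then decrypts the item.
def decrypt_item(item, client_name, private_key)
  shared_key = private_key.private_decrypt(item[:keys].fetch(client_name))
  cipher = OpenSSL::Cipher.new('aes-256-cbc').decrypt
  cipher.key = shared_key
  cipher.iv = item[:iv]
  cipher.update(item[:data]) + cipher.final
end
```

Because only pre-wrapped copies of the shared key exist, a client added later has nothing it can unwrap, which is exactly why the refresh/update commands below are needed.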
### Making it Even More Secure

If your onboarding approach isn’t completely locked down, [nodes are able to declare](/posts/policyfiles) their own `policy_name` and therefore could access these secrets as they join this group. If this concerns you, specify each node explicitly through the `-A` flag. So your command would be:

```bash
knife vault create passwords root -A "michael,webserver1,webserver2" -J root.json -M client
```

## Viewing a Vault

Now that we have created a vault, let’s view it:

```bash
knife vault show passwords root -M client
```

which will output:

```text
id: root
password: myPassword
username: mhedgpeth
```

It lets me view it in cleartext because I am one of the administrators on the vault itself. I can even view it as JSON if I want to move the item to another Chef Server:

```bash
knife vault show passwords root -M client -Fjson
```

## Viewing Encrypted Version

To view the encrypted version of the vault, you can simply use the normal commands for viewing data bags, just realizing that the vault data bag also has a `_keys` item:

```bash
knife data bag show passwords root
```

and

```bash
knife data bag show passwords root_keys
```

will show you lots of encrypted goodness which I will not reproduce here. The `root_keys` item is helpful for seeing which clients are connected to the vault.

## Adding nodes

Probably the weakest part of `chef-vault` is what to do when you add nodes. If your nodes grow and shrink dynamically this can be dicey, because when you add nodes, you have to run this command to generate keys for those nodes to read the encrypted data:

```bash
knife vault refresh passwords root --clean-unknown-clients
```

This updates the `root_keys` encrypted data bag with information on the nodes that now match the search criteria. So it’s important to know that the set of nodes that can read a vault is a snapshot in time based on the search criteria, not a dynamic list.
If you aren’t using search criteria, you’ll need to add nodes to the administrators list itself:

```bash
knife vault update passwords root -A 'newnode,newnode2'
```

## Rotating keys

You might want to rotate the key that encrypts the data in the data bag. The way this works is that each client’s public key (held by the Chef Server) is used to encrypt the data bag’s shared key, and the client decrypts that shared key with its own private key. The shared key is what encrypts the real data bag. This command will change that key:

```bash
knife vault rotate all keys
```

## Cookbook Development

What use is a data bag without using it in a cookbook? To be able to read this data bag in a cookbook, include the `chef-vault::default` recipe in your run list. Then you will have the `chef_vault_item` method that you can call like this:

```ruby
item = chef_vault_item("passwords", "root")
password = item['password']
```

Using `chef_vault_item` will make your cookbook more testable with test kitchen (see below).

## Version Control

With data bags, we like to have a data_bags repository that we use to promote shared data and version control changes. This kind of thing doesn’t work with `chef-vault`. Instead, you designate a small team that can update the vault and have them do it manually. This isn’t ideal, but secrets are hard and, as I wrote above, using a dedicated secrets management tool like Hashicorp Vault will save you from that level of work.

## Kitchen Support

To make this work in kitchen, just put a cleartext data bag in the `data_bags` folder that your kitchen run refers to (probably in `test/integration/data_bags`). Then the vault commands fall back to using that dummy data when you use `chef_vault_item` to retrieve it.

## Conclusion

The `chef-vault` functionality is compelling enough for serious consideration in simple use cases. I would never recommend using encrypted data bags, because the support in chef-vault is more sophisticated without adding a lot of complexity.
It’s the right solution for Chef secrets when Hashicorp Vault is too complicated or expensive.

---

# The Power of Precedence

URL: https://hedge-ops.com/posts/the-power-of-precedence/

Explore the power of precedence in driving change initiatives. Learn how to avoid the trap of the ‘Perfect State’ and instead use real-world examples to address concerns and increase profitability.

The other day a friend and colleague was working on a central initiative and asked me for help. He couldn’t get any traction on this initiative because all the people who would need to approve it kept bringing up more and more issues. The groups were caught [in a common trap](/posts/all-or-nothing-changes) of any change: let’s make [the Perfect State that Everyone Will Adhere To](/posts/the-grand-vision). The stakes of the whole endeavor are raised when we arrive at that point. Everyone is now thinking to themselves, “This is my last chance to have any say in how this works; I better get all my concerns addressed before moving forward with it.” [In a large enterprise](/posts/my-advice-for-chef-in-large-corporations), this means that there will be tons of meetings, tons of confusion, and a lot of wasted time. I recommended that my friend use the power of precedence to his advantage. Don’t fall for the trap of making the Perfect State. Instead, find a team with a serious business problem that your solution will address. Use that team’s leverage within the organization to get your change operational. Repeat this process. Pretty soon you will have a lot of teams using the solution, getting obvious value out of it. With this strategy, as concerns come along, they are rightly raised within a business context and not as some mental exercise. This keeps everyone focused on doing what we’re paid to do: increase profitability through increased efficiency, increased revenue due to faster speed to market, and lower risk.
And, since you’ve been doing this in the real world, as concerns arise, you can say, “Let me show you how we do it.” You have precedence on your side. This isn’t an ivory tower exercise. You’re not hopping from one Visio document to another to get everyone on board before you try something out. You’re doing [incremental experiments](/posts/measure-for-reality) which are leading you to increased profitability and lower risk. This is creating a flywheel of change for the organization, within which people can express their concerns and thus become a part of the process to greater profitability and decreased risk. This is what a functional change initiative looks like. Everything else, unfortunately, is usually theater.

---

# The Missing Compiler/Unit Test for Feelings

URL: https://hedge-ops.com/posts/the-missing-compilerunit-test-for-feelings/

Explore the importance of understanding and influencing how others perceive you in a technical environment. Learn why there’s no compiler or unit test for feelings, and how to navigate this challenge.

As technical people we become fascinated early on by [the](/posts/christmas-with-teamcity) [numerous](/posts/getting-things-done-action-plan) [tools](/posts/my-advice-for-chef-in-large-corporations) out there that make our jobs easier. A compiler or syntax checker is almost an afterthought these days: of course you would want some guidance on whether your code was in compliance with the language before running it. Unit tests are the same way; if I can run something and get a good answer on whether it is OK _before_ I put it in a shared environment, then [that will save me tons of time](/posts/test-kitchen-required-not-optional). Unfortunately, there isn’t a similar tool for how people feel about you.
A technical person can fall into the trap of looking at more technical-oriented clues to how people feel, but these often fall short:

| Team | Natural Alignment | Natural Misalignment |
| --------------------- | ----------------------------------------------------------------------------------- | --------------------------------------------------- |
| Development | Faster delivery of features | Have to be engaged in operations, more _work_ to do |
| Operations | Fewer fires, more consistency | Have to learn a new skillset and be a beginner |
| Security | More consistency, compliance | Automation can cause unknown vulnerabilities |
| Business Stakeholders | Faster ROI for development, lower cost for operations, and a scale model that works | Takes ongoing investment in culture and tools |

The first step in keeping this kind of thing from happening to you is to care about people. [Put them before yourself](/posts/technology-partnership). Follow the golden rule. [Don’t be an asshole](/posts/the-technical-asshole-curse). The second step is to understand and embrace the true value in what people think about you when you’re not there. You can’t control this part; you can only influence it. If they think you’re an idiot or an asshole, work to show them that you have their best interests in mind and that you’re growing. Are you including them when you are solving problems, so _they_ own it and won’t blame you in disgust if something goes wrong? That approach is fraught with difficulty, especially for someone with a technical background. There is no compiler for it. There is no unit test. There isn’t even a clear answer on how exactly people feel about you. The only thing you can do is care about it and try to influence it positively. In technology, caring about this is so uncommon that it ends up going a long way.
---

# The Technical Asshole Curse

URL: https://hedge-ops.com/posts/the-technical-asshole-curse/

Explore the ‘Technical Asshole Curse’ in the tech industry, where hard skills overshadow soft skills, leading to a toxic work environment. Learn how to avoid this trap and foster a healthier workspace.

I’ve seen it happen over and over again, and [I fight it in myself every day](/posts/all-or-nothing-changes). I call it the Technical Asshole Curse. We’ll illustrate the curse by following our friend Joe through his career. It starts so innocently: Joe cares. He wants to make a difference. He reads up on the best way to do something, stands up in the important meeting, and gives management a way out of this mess. Joe introduces test-driven development, or a better way to provision infrastructure, or [a great new devops tool](/posts/intrinsic-motivators-leading-to-chef). People take notice. Joe then gets promoted. People start coming to Joe for help on all things related to his expertise. Joe is now the go-to person for that topic. It’s worth stopping here to note an important truth that Joe doesn’t quite get: Joe is respected and given praise because of his _hard skills_, not his _soft skills_. In other words, people think he brings value to the organization because of his knowledge and not because of how he treats people. In fact, if people were honest with Joe, they would tell him that they are often uncomfortable with how Joe treats them. Sometimes Joe gets frustrated in meetings and talks over people. Other times Joe answers emails with no empathy or understanding. But Joe is unaware of these problems, because everyone is so enamored with the technical value that Joe is bringing to the table. As Joe’s career grows, the business builds a team around Joe to insulate him from those who wouldn’t get him: other teams, top management, and especially the customers. Joe doesn’t care and thinks this is a good thing.
But this is not a good thing: Joe is surrounded by people who think Joe is an asshole and that it’s their job to keep that fact from hurting the business. Joe’s colleagues go home from work and talk about how much they despise Joe, but how there is nothing they can do about it because Joe is so valuable to the company. The more they feel trapped, the more their secret resentment toward Joe grows. This is what happens over and over again when people only find their value in their technical contributions and ignore their interpersonal contributions. It’s a curse that happens to good people who stand up and make a difference, but fail to properly appreciate that the difference they seek combines an excellent approach with a group of people who feel great about working together. Businesses will rarely tell people that they’re an asshole. Rest assured: [they will talk about it when the asshole isn’t present](https://www.youtube.com/watch?v=Rt86dc6EIoY). And many times the fix will do nothing to get the asshole some help. Instead, it will enable the asshole to become more of one, while the cycle of resentment and frustration descends into further and further depths. It’s so easy to be an asshole in technology. In order for your idea to get through, you have to care, and you have to tell people that they are doing it all wrong. If you’re not careful, you end up thinking their feelings are an unnecessary detail. That will be so destructive for your career if you let it fester. Instead, take people to lunch. Find out what they care about and empathize with their problems. Let _them_ come up with the solutions and help them out. Make it OK to be wrong about stuff. In short, don’t be an asshole.

---

# Feelings > Compliance

URL: https://hedge-ops.com/posts/feelings-compliance/

Explore the importance of considering feelings over compliance in leadership. Discover how fostering respect and excitement can lead to more effective collaboration and success.
On the surface, leadership looks like an exercise in getting as many people as possible compliant with one’s vision and direction. A naive leader will find themselves forcing others to comply with their policies, their direction, their demands. A while back I was in a meeting with a colleague going over the particulars of how I was going to implement a particular project. He started coming up with what I thought were dumb requirements, but I didn’t think it would help things to fight him, so I dutifully wrote the requirements down and told him that I would implement them. We went back and forth a few times to make sure I had it, and I explained that I had it and that I was going to implement the requirements the way he wanted them. On the outside, I was compliant with my colleague’s wishes. On the inside I was frustrated and wanted to exit the conversation. It’s so easy to dismiss how people feel when I work with them. I’ve come to realize that their feelings about what I’m doing are more important than how much they outwardly follow my direction. If I interact with people, and they come away from it feeling respected, listened to, and excited about what is possible, then I don’t have to worry about their compliance. On the other hand, if they are compliant but secretly think I’m an asshole, then they will probably do the absolute minimum and will be happy when I fail. I find it easier to _start_ with people’s feelings. How do you feel about this change, project, or challenge? What can we come up with together that will address some of those things? If we have that conversation, then they are less likely to be externally compliant yet secretly hostile.

---

# Tutorial for Test Kitchen with Azure

URL: https://hedge-ops.com/posts/tutorial-for-test-kitchen-with-azure/

Learn how to run Test Kitchen with Azure in this step-by-step tutorial. Discover the benefits of using Azure, how to set up, and the commands to use.
Ideal for those seeking a hassle-free testing environment. As I wrote in [the last post](/posts/test-kitchen-required-not-optional), Test Kitchen was one of the [things that attracted me to Chef](/posts/learning-chef-book-review). There was a problem, though: running Windows on virtual machines automatically is difficult. I’ve spent quite a bit of time trying to create a Vagrant image [using Matt Wrock’s excellent blog](http://www.hurryupandwait.io/blog/creating-windows-base-images-for-virtualbox-and-hyper-v-using-packer-boxstarter-and-vagrant) as a resource, and haven’t quite gotten it there yet. Plus, if I go the Vagrant route, people have to have powerful machines on which to run Test Kitchen. The more I worked through that option, the more I became discouraged and dismayed that this may just never work for us. And then I discovered Azure. Don’t get me wrong: I’m not a Microsoft fanboy. But there are some great advantages to going this route: 1. Through my Microsoft-friendly workplace I get an MSDN subscription, [with which I get $50/month credit to use Azure](https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits/). So this is free, and I can run Test Kitchen on compute resources that aren’t mine. 2. Microsoft by definition is going to get Windows images right. So I don’t have to fight it anymore. I can just use it. It just works, just like it should. 3. [Stuart Preston](http://stuartpreston.net/) wrote a plugin that gets anyone past the learning curve very quickly. With this plugin you don’t have to know much of anything about Azure to use it for Test Kitchen. These reasons are so compelling that this is what our teams will be going with in the coming months. It’s critical that everyone be able to run Test Kitchen easily, and Azure gives us the best shot at doing that without a lot of drama. Setting up was easy: 1. 
[Activate your subscription from your MSDN account](http://blogs.msdn.com/b/msgulfcommunity/archive/2014/09/15/how-to-activate-azure-benefit-for-msdn-subscribers.aspx) 2. [Install the Azure CLI for Windows](https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/) 3. Follow the directions [on the kitchen-azurerm main page](https://github.com/pendrica/kitchen-azurerm) to set up a Service Principal with its tenant and password, and configure them in your user directory 4. In a simple cookbook, create [a simple kitchen.yml file](https://gist.github.com/mhedgpeth/a70ef0a7edf01d9c7ed2) like this:

```yaml
---
driver:
  name: azurerm

driver_config:
  subscription_id: <%= ENV['AZURE_SUBSCRIPTION_ID'] %>
  location: "South Central US"
  machine_size: "Standard_D1"

provisioner:
  name: chef_zero

verifier:
  name: inspec

platforms:
  - name: windows2012-r2
    driver_config:
      image_urn: MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest
    transport:
      name: winrm
  - name: centos71
    driver_config:
      image_urn: OpenLogic:CentOS:7.1:latest

suites:
  - name: default
    run_list:
      - recipe[contributors::default]
    attributes:
```

It’s really that simple. Now I can run Test Kitchen commands:

| command          | description                                                            |
| ---------------- | ---------------------------------------------------------------------- |
| kitchen create   | creates Azure infrastructure for running, powers on machines           |
| kitchen converge | does kitchen create if needed, will converge the node using Chef       |
| kitchen verify   | does create and converge if needed, runs the tests that you’ve written |
| kitchen test     | does everything: create, converge, verify                              |
| kitchen destroy  | don’t forget this one; it removes the resources                        |

There you have it: go through those easy steps, and you have Test Kitchen working with Azure. --- # Test Kitchen: Required, not Optional URL: https://hedge-ops.com/posts/test-kitchen-required-not-optional/ Explore the importance of Test Kitchen in the Chef workflow. 
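The Azure tutorial above boils down to exporting one environment variable and then running the standard Kitchen commands. As a minimal sketch (the subscription ID below is a placeholder for illustration; the Kitchen commands are shown as comments because they assume a configured kitchen-azurerm install):

```shell
# Placeholder subscription ID -- substitute the one from your own Azure account.
# The ERB tag in kitchen.yml (<%= ENV['AZURE_SUBSCRIPTION_ID'] %>) reads this value.
export AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

# With kitchen-azurerm and the Azure CLI configured, the usual cycle is:
#   kitchen create    # provisions the Azure VMs
#   kitchen converge  # runs Chef against them
#   kitchen verify    # runs the tests you've written
#   kitchen destroy   # removes the Azure resources so they stop billing
```

Putting the export in your shell profile keeps the subscription ID out of source control while letting every cookbook’s kitchen.yml pick it up.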
Learn how it’s not just for testing, but an essential part of coding with Chef. Don’t miss out on this essential tool for your infrastructure. When I first started reading through [the Learning Chef book](/posts/learning-chef-book-review) I became quite fascinated and enamored by [Test Kitchen](http://kitchen.ci/). The community created such a wonderful way to introduce testing into their workflow. That’s fantastic! Integration and support of Test Kitchen was one of our reasons for [partnering with Chef](/posts/technology-partnership). We had a way to create a test-driven infrastructure, which would be essential to truly scaling our automation to fit our vision. But, I reasoned, for now we would leave it out of the picture, so we could focus on the more important tasks like developing cookbooks and establishing a [change-management workflow](/posts/my-advice-for-chef-in-large-corporations) that fit our broader security model. I now see that I was looking at this all wrong. The choice to forego testing is a common one: teams often make sure they have a core idea that will work before they invest in testing. Then they pivot very hard into the testing direction once the core is there. This is the direction I took, largely because of how we couldn’t easily get Windows, Test Kitchen, and Vagrant to work together. I changed my mind when I recently worked with a group of 25 people learning Chef. In the workshop I asked people to set up a virtual machine somewhere, copy stuff over, get it on a Chef server (or run it in local mode directly), and then watched them struggle with the nonessential details and not get much done. The reality then dawned on me: Test Kitchen is the only efficient way to run your cookbooks. It’s not for testing first. It’s for running first. If you are a developer, you’re used to coding a little and running a little. 
The reality all developers discovered decades ago is that you’re not going to get very far with coding unless you are running your code frequently. Since Chef runs on infrastructure, it’s much more difficult to run. You have to run it on a virtual machine. This is what Test Kitchen is for. Using Chef without Test Kitchen is like opening a restaurant and inviting everyone to taste the food without practicing with your kitchen staff first. No one would do that because it would fail miserably. The restaurant would spend a massive amount of time getting feedback on a product that they can’t trust is ready for external consumption. So my next task is to get us up and running with Test Kitchen. I now know that it’s not just a nice tool for testing; it’s an essential part of coding with Chef. --- # All or Nothing Changes URL: https://hedge-ops.com/posts/all-or-nothing-changes/ Discover how Cognitive Behavioral Therapy can help combat all-or-nothing thinking. Learn to balance work, diet, and change initiatives for a healthier mindset. [Cognitive Behavioral Therapy](http://www.amazon.com/Feeling-Good-New-Mood-Therapy-ebook/dp/B009UW5X4C/ref=sr_1_1?s=books&ie=UTF8&qid=1452553054&sr=1-1&keywords=feeling+good) has taught me the danger of all-or-nothing thinking. If I enjoy my job, perhaps it isn’t a good idea to work 80 hours a week at it. If I fail at work, then _all is not lost_. If I want to lose weight, perhaps the best thing to do is stop eating cookies. Maybe if I try to limit my processed food intake to zero I will freak out and eat three boxes of Oreos from the store. All-or-nothing thinking is the single most formidable thinking error in my life. So it’s only natural that I would fight it in change initiatives at work. When I get an idea for a change, I become obsessed. I read everything I can about it. I watch videos. I absorb the technology. Then I come up with [a grand vision](/posts/the-grand-vision) about how everything can be different. It’s exciting. 
But if I sell people on _that_ grand vision, I’ve fallen into a trap by handing the opponents of the change a huge weapon. If what you’re selling is perfection (or _all_ in this case), then all it takes is for someone to demonstrate how your tool can’t possibly solve all the problems that we have. If instead you’re selling that a change will create a measurable impact on some of our major problems and opportunities, then of course the change can be scaled elsewhere later. But let’s not focus on that right now. Let’s make something work. Let’s show everyone that this is for real. Let’s avoid _nothing_. But let’s avoid _all_, as well. Now that I don’t have to create the perfect utopia I will brag about to my grandchildren, I can solve real problems. I can be flexible with my solution architecture. I can be more flexible with tools. I can create a platform for even greater change. Change never happens in a single inspirational speech. It happens every day, little by little, one problem at a time. --- # The Power of Culture in Cross-Discipline Change Initiatives URL: https://hedge-ops.com/posts/the-power-of-culture-in-cross-discipline-change-initiatives/ Explore the power of culture in driving cross-discipline change initiatives. Understand the roles of developers, SecOps, and Operations in a business and how to foster cooperation. When I started my career, I was rewarded for [being creative](/posts/christmas-with-teamcity), stretching the boundaries, getting changes through the system to [bring more revenue to my company](/posts/funding). I was a developer. When my SecOps colleague started his career, he was rewarded for keeping people like me from destroying the business with poorly planned implementations that make us vulnerable to attacks. 
When my Operations colleague started her career, she was rewarded for taking the crazy ideas that the developer wanted to implement and translating them into something that _will actually work_, subject to the rules that the SecOps person dictates. In the natural state, we have all created value in our careers by trying to work around the flaws of the other groups. To the developer, SecOps and Operations needlessly slow everything down. To SecOps, developers are dangerous and operations are unreliable. To Operations, SecOps are paranoid and developers don’t have a clue. I vastly underrated the power of these cultural scripts when first initiating our change initiatives around DevOps and automation. In fact, I mindlessly continued to follow my script. I went to SecOps with the attitude of: _here’s this awesome change I want to do that will change our business, please approve it_. They followed their cultural script with the response of _oh look here is a developer who just walked in with a weapon that can wipe out our entire business_. There is no partnership there; there is only conflict. And unfortunately, conflict is what I began with. [I’m still working to undo the damage](/posts/my-advice-for-chef-in-large-corporations) I did in those early days. Instead of the attitude I learned as a developer, I should have taken the attitude of a business person: _What are the problems that are dragging down revenue and increasing costs, or have the potential to, and how can I help fix them_? It turns out that SecOps and Operations both have extremely valuable roles, and they aren’t getting in the way of my awesome developer changes. They have problems just like the rest of us, and if I take the time to understand them, perhaps we can partner and solve them together. Instead of coming to SecOps to try to get approval for the tool, why don’t I start with their compliance challenges and how we can solve those? 
If I can use a tool to get their system more compliant, then that’s a _better_ baseline from which we can do some other great things, like configuration management. Instead of coming to Operations to merely implement the tool, why don’t I start with the problems they are having and iteratively help them solve those problems? Instead of just relying on the development teams, maybe I should start going to the Change Advisory Board meetings and then show up when the deployment happens. After that I can follow up and say, “For a couple of days of work, we can automate that. How does that sound?” All of a sudden I go from being a _developer who doesn’t get it_ to a _partner who will make my life easier_. When the cultural roles shift away from conflict and towards cooperation, magical things will happen. I’m working like crazy to make that happen right now. --- # The Overdependent Organization URL: https://hedge-ops.com/posts/the-overdependent-organization/ Explore the dangers of overdependence in organizations through a personal journey. Learn why successful projects shouldn’t rely on one person and the importance of a strong, self-sustaining foundation. Years ago we went to a small church that met in a Boys & Girls Club in our city. At one point I was leading the music on Sunday mornings, running our new visitor/assimilation program, attending a small group during the week, and leading the youth ministry of 10–20 students. This all happened while I had a full-time computer programming job, our first child was born, and we were contributing 10% of my pre-tax income to the church. I eventually got tired of all of this and decided it was time to move on. Within a month the church ended. There wasn’t a banner saying, _Michael is leaving this church…we’re shutting down_. In fact the story is much different than that. But I couldn’t help but wonder what would have happened if I had chosen to leave a few years earlier. Would things have lasted so long? 
I’m inclined to think not. This experience has caused me to look at dependence on me in a different light. If I am _the only reason_ a project is staying afloat, it’s time to get out. Things that are worth doing don’t depend on one person to do them. They have followers, believers, excited people who will step up and fill in when you move on. If I leave a project and see that it completely implodes, it’s a sign to me that I was shielding the organization from seeing the truth: that the initiative was fundamentally flawed and I was keeping it afloat. What I’d prefer to see is moving on from something that _gets better_ after I leave. I laid a foundation, and there was a true solution to a true problem that didn’t require my absolute attention. That’s the ultimate goal in anything I do at the moment. --- # Proof of Concept URL: https://hedge-ops.com/posts/proof-of-concept/ Explore the importance of including culture and security in a proof of concept. Learn how these factors can influence the successful implementation of new tools or changes within an organization. It’s a classic scenario: a group of people [want to use a tool](/posts/dont-start-with-tools), but before they can, they do a proof of concept (or POC) with the technology as a means of showing that it will do what it says it will do. In the past the proof of concept was purely a technical issue: how does the tool act when working with the various use cases we’ve identified? We create a sandbox and show the value at a demo or two, then we’re ready to move forward. It’s easy to stop there. But I’ve learned to take things further. The proof of concept should include at least two other aspects beyond technology: _Culture:_ can you _prove_ (through an experiment or two) that the people who will be involved with the new tool or change will be engaged as expected? It’s one thing for the person who is excited about the technology to get it to provide value, but can an average person within the organization do the same? 
_Security:_ how does your SecOps team feel about this change? Are they on board with it or resisting it? Will they cooperate to the extent that they can put it in production in a limited capacity? For [our Chef initiative](/posts/intrinsic-motivators-leading-to-chef) both of these elements were concerns that were outside the scope of our proof of concept. If I were to do things again, I would have put them in scope. That would have better clarified and added urgency to all elements that posed a risk to the change. A good test for a true proof of concept is whether you can get something running in production. If the answer is yes, then you are probably ready to go. If the answer is no, then watch out for what you’re buying and how easily you will be able to roll out the change you are promising. --- # Building Alliances URL: https://hedge-ops.com/posts/building-alliances/ Discover the importance of building alliances to effectively persuade people, especially in challenging situations. Learn from real-life experiences and expert advice on fostering a supportive team. The other day I met with a friend and colleague about why a particular meeting went badly. The meeting was a few weeks back with a team that isn’t exactly thrilled about an initiative I’m championing. I knew this and basically did a demonstration of the proposed change, during which I was peppered with question after question. You know when you’re in a meeting and everyone is going along with it and the questions are constructive and getting everyone more excited about the possibilities? Well, this wasn’t that meeting. So I’m following up with this friend, and he’s telling me that I should have met with him and his team before I met with the more hostile team. That way he could have known the context of my proposal and jumped in with his perspective in a way that didn’t make it _the team vs. Michael_ but made it more of a discussion. 
It was as if I put my chess piece unguarded on the other side of the board; I’m going to get killed. I need to have alliances if I’m going to effectively persuade people, especially those more naturally hostile to my proposal. I was reminded of this recently when I read through [Mandi Walls](https://twitter.com/lnxchk)’s really great and short book [Building a DevOps Culture](http://www.amazon.com/Building-DevOps-Culture-Mandi-Walls-ebook/dp/B00CBM1WFC/ref=sr_1_1?s=books&ie=UTF8&qid=1452606188&sr=1-1&keywords=building+a+devops+culture). She writes about the need to have a team of people around you who will help you roll out your DevOps initiatives:

> This person, or team of people, will serve an important role, a combination of evangelist, tools expert, process subject-matter expert (SME), and buddy. They’re like your camp counselors. They’ll teach a little bit, answer questions, reassure the reluctant, and bring the marshmallows at the end of a long day. These folks are hard to find, and often aren’t who you think they are. The last person you want for a job like this is someone who’s smart but a complete jerk everyone hates but who happens to know how to use the tools. Find the people who everyone wishes were on their team, borrow some of their time, and form a working group to help other teams.

Mandi is right. At first, I thought of myself as the champion of this thing. Then I very quickly met John, who was better liked, more experienced, and more passionate about all of this than I am. John has delivered great results for both me and the company. He is so valuable to what we’re doing that I won’t use his last name, won’t link to his LinkedIn profile, and won’t even say whether I have changed his first name. John is the type of person I want on my team. 
Going forward, if I am going into a potentially hostile situation or into a group of people who aren’t naturally inclined to go along with what I’m trying to do, I’ll surround myself with well-liked, credible people who are able to fill in where I can’t. --- # Technology < Partnership URL: https://hedge-ops.com/posts/technology-partnership/ Explore why our organization chose Chef over other configuration management tools. It’s not just about the technology, but the partnership and cultural changes that lead to success. Last year our organization [made a major decision to use Chef](/posts/intrinsic-motivators-leading-to-chef) for our configuration management. People often ask me why we chose Chef over Puppet, SaltStack, or Ansible. I tell them that we chose Chef over the others because they have a better sales organization. I’m only half kidding. I’ve come to believe that the essence of success with change initiatives and tools is [not about the underlying technology](/posts/dont-start-with-tools). Chances are if you have a [funded popular tool](/posts/funding) it’s going to have some cool technology. All of the above were cool. Instead, the essence of success with change initiatives is managing the organizational and cultural changes needed to make it to the other side. Chef is a great partner in helping us navigate our change through the various parties and [into a model that rapidly delivers value for our business](/posts/my-advice-for-chef-in-large-corporations). I’ve been extremely impressed with their organization at all levels and don’t think I would be where I am without their help. So don’t start with the tools. Start with the people, the culture, the process that will lead to safe, repeatable, high-velocity change. If you find a tool that will partner with you in that discovery, you probably have a world-class tool. If you start with the tool and ignore the other things, you might end up with a cool tool that no one wants or understands. 
--- # Why Before What URL: https://hedge-ops.com/posts/why-before-what/ Explore the importance of establishing a shared vision before implementing change in an agile transformation. Learn why understanding the ‘why’ is crucial before jumping to the ‘what’. Recently a colleague and I were talking about his team’s agile transformation. He is excited about what’s to come but has been challenged to bring everyone together in a meaningful way that leads to the change he wants to see. We talked about the importance of starting with a [shared vision of a problem](/posts/who-is-with-you) and [iteratively working toward a solution](/posts/measure-for-reality). It seems easier to just start with the end: “We’re going Agile!!!!” and watch everyone fall in line. Unfortunately it never happens that way. I was reminded of this while recently reading [Building a DevOps Culture](http://www.amazon.com/Building-DevOps-Culture-Mandi-Walls-ebook/dp/B00CBM1WFC/ref=sr_1_1?ie=UTF8&qid=1452565552&sr=8-1&keywords=building+devops+culture) by [Mandi Walls](https://twitter.com/lnxchk). She writes:

> Proving change is necessary requires some legwork. It’s fine to want to change your organization because _everyone_ is doing DevOps now, but you’re looking at months of work, new tools to learn and implement, teams to restructure. These costs must be outweighed by the benefits, so you have to be able to put real value on your processes.
>
> Articulating upfront what your goals are will help you with other phases of your DevOps roll out.

This is exactly the work I’ve been doing related to Chef over the past year. Recently we started talking about improving monitoring as well. It’s easy to say, “We’re going with New Relic!” Or, “We’re going with App Dynamics!” I’ve resisted that siren song, though, to really dig into what the problems are and what specific solutions fit the problems. When the problems are clear and measurable, the solutions have alignment and buy-in, and funding is easy. 
Without those core components, I’m afraid the initiative is doomed. So I’m following Mandi’s advice and doing the hard work up front to define a shared vision of the problem. --- # Don’t Start with Tools URL: https://hedge-ops.com/posts/dont-start-with-tools/ Explore why it’s crucial to start with identifying business problems before choosing the right tools. Learn from real-life experiences and shift your focus for effective problem-solving. In the past whenever I’ve tried to solve a technical problem, the first thing I would do is [find the right tool for the job](/posts/christmas-with-teamcity). I would then test that tool against the known use cases, share it with others in the organization, and see if excitement warrants further consideration. If we’re spending a lot of money, the vendor will get involved and help push everyone toward a decision to go with the tool. That phase is so exciting, so full of promise… And often completely misguided. Instead of starting with the tool, I’ve learned that I need to start with the business and the problems its leaders face. Once we all agree on the problems and have a clear, shared, measurable view of those problems, we can then determine the right tool for the job. Recently I was in a meeting about [Chef](/posts/intrinsic-motivators-leading-to-chef) with a colleague who is not very interested in adopting Chef for his project. He doesn’t see how Chef fits with his operational goals for next year. His main pain point was the need for greater operational visibility into his entire stack of hundreds of nodes. So I asked him, “What if we helped you solve that problem with a greater focus on monitoring?” He paused and said to me, “I thought this meeting was about Chef.” Once the discussion becomes about the tool, you’re no longer having the right conversation. Start with the problems, and find good solutions that fit those problems. Repeat. 
--- # Putting Developers On Call URL: https://hedge-ops.com/posts/putting-developers-on-call/ Explore the benefits and challenges of putting developers on call in a DevOps culture. Discover how this approach can foster responsibility, improve product design, and enhance collaboration between teams. Recently I read through the short but sweet book [Building a DevOps Culture](http://www.amazon.com/Building-DevOps-Culture-Mandi-Walls-ebook/dp/B00CBM1WFC/ref=sr_1_1?ie=UTF8&qid=1452554943&sr=8-1&keywords=building+devops+culture) by [Mandi Walls](https://twitter.com/lnxchk). In the book Mandi has an idea that I admit to having completely dismissed the moment I read it:

> One of the early controversial aspects of what became DevOps was the assertion that Engineering should be doing on-call rotations. In fact, this idea was presented in a way that made it sound like your developers would want to be on call if they were truly dedicated to building the best possible product, because they were the ones responsible for the code.

My immediate reaction was: “That would never work for us.” Whenever I have that kind of reaction, it’s a red flag. Never use the word never. So I asked myself why, thought about it, and dug a little deeper. The main reason this would be a challenge for us is that for compliance reasons [we keep developers out of production](http://www.sans.edu/research/security-laboratory/article/it-separation-duties). So even if the developers were on call, they would not be able to do much because they couldn’t access production. So that’s a bummer. But wait, do they _have_ to access production to be on call? What if they were on call _with_ someone in operations who had the access? When there was an incident they could hop on to a screen sharing application and troubleshoot things together. 
In fact, this mirrors what happens when an incident is escalated: you have everyone get on a call and talk through an issue together, with operations having the power and control to make the changes. If our developers were on call, they would accept more responsibility for creating a great product. They would then see the outcome of their good or bad design, and improve on it. Operations would feel like they’re not on an island trying to make something work with no context. The good things that could come out of such an initiative surely outweigh the problems. Now that I’m sold on this being something that could help us, I’m going to bring it up as something to try. It will be interesting to see if the idea has legs. --- # Who is with you? URL: https://hedge-ops.com/posts/who-is-with-you/ Discover the importance of team alignment in problem-solving. Learn why involving everyone in the solution process leads to better execution and success. There are three elements to every solution: 1. Knowing the problem well enough to know a few solutions and what you think is best 2. Bringing together everyone’s idea of a solution into one strategy 3. Execution It’s so easy to skip step #2. Why would you need that [if you are so smart](/posts/surrounded) and already know the solutions? The problem is that in order to _really_ do #3 you have [to have alignment](/posts/alignment). When you get to the end and have the solution, you should look around and see a crowd around you celebrating the solving of _their_ problems with _their_ solution. You are solving this together. If you try to solve it alone you will quickly realize that you spent a lot of energy going in the wrong direction. --- # Funding URL: https://hedge-ops.com/posts/funding/ Discover why funding is the litmus test for the viability of an idea in business. Learn how to navigate the funding question and reveal the true priorities of your organization. We have a great idea. [We have alignment](/posts/alignment). 
Everyone is excited. This is the right change for our organization! But there isn’t funding for it. Wait! Someone says. We can work on it on the side. Maybe we’ll make a little progress on it. No, you won’t. Most of us work in businesses that exist to turn a profit. There are two ways to do this: increase revenue and reduce costs. Those who have a compelling case that revenue will increase or costs will decrease will get the funding to pursue those objectives. It’s simple business. I always use the funding question as a litmus test for the viability of an idea. Are we willing to incur licensing and staffing expenses in order to create this outcome? If not, then (1) I misread the cost or revenue realities of my organization or (2) I didn’t properly sell them to the stakeholders. You see the true priorities of your business when you ask the funding question. This is why I never move past this question until it’s properly answered. --- # Whose Goals? URL: https://hedge-ops.com/posts/whose-goals/ Explore the power of setting personal goals and the potential pitfalls of not doing so in this insightful blog post. Discover how not having clear goals can lead to inadvertently fulfilling others’ objectives. There have been times when my goals were [clear, written, specific](/posts/planned-thinking). When that happens, a whole range of possibilities exists. The power of specific goals always amazes me. Other times I’m coasting with no goals. I’m just making it to the next thing and not really thinking about it. It’s easy to think that I just didn’t have anyone’s goals during those periods. A closer inspection reveals otherwise: I was meeting the goals of my more savvy colleagues who had clear goals themselves. I was meeting the goals of Facebook or Twitter executives, for more page views, more advertising revenue. I was meeting the goals of the Dallas Cowboys organization, for me to be a more engaged and thus more lucrative customer. Goals always exist. 
My only choice is between my own goals and someone else’s. --- # Surrounded URL: https://hedge-ops.com/posts/surrounded/ Explore the journey from being a know-it-all to a leader in our latest blog post, Surrounded. Discover the importance of surrounding yourself with knowledgeable people for personal growth and organizational success. Early in my career I was rewarded for knowing everything. People noticed when I interrupted, when I had the right answer. My boss took notice when I had the insight that he didn’t have at a critical moment. The feeling of being the one who knows is so rewarding and addictive. It’s a shame that it’s so limiting. As you grow, your ability to know and recognize the problems and solutions grows. Code turns into design. Design turns into process. Process turns into strategy. If you have to be the smartest person in the room at all times, you will simply not grow into the later stages. It’s impossible. So find a new addiction: surround yourself with people who have the answers that you used to have. Take notice at the critical moments when others stand up and reward those who fill those critical needs. And enjoy the journey into new territory that you’re there to conquer: the strategy, leadership, and process that it takes for your team and organization to be successful. --- # The Inferior “Right” Way URL: https://hedge-ops.com/posts/the-inferior-right-way/ Explore the pitfalls of blindly following the “right” solution in tech and organizational success. Learn how to align strategies with key stakeholders and find the truly effective tools for your business. I’ve spent much of my career trying to find and remove roadblocks to technical and organizational success. This passion leads me to great tools like [TeamCity](/posts/christmas-with-teamcity) and [Chef](/posts/intrinsic-motivators-leading-to-chef). 
My success in leveraging these tools for our organization leads me to be opinionated about what tools we should be using and how we should be doing things. It is so easy to get locked into the _right_ solution that would _solve all of our problems_. Early on that worked just fine for me, but over the years I’ve changed my approach. As I’ve grown with our organization from a newly acquired startup to a mid-sized company to a large multinational, I’ve realized that doing the _right_ thing [without alignment](/posts/alignment) with the key stakeholders is the wrong thing. It’s not enough to read a book and evaluate a tool like Chef to see that it will solve our problems. It’s not even enough to talk developers into using it and seeing its value. One must do serious work to analyze the state of the business, find the pain points that are either preventing revenue or creating unnecessary cost, and then set a strategy for addressing those things. Only after that does one find the _right_ way. Only then does one find the _tool_ to solve the problem. Anyone claiming to have the _right_ tool or solution before that analysis happens is likely wasting your time. --- # Alignment URL: https://hedge-ops.com/posts/alignment/ Discover the power of alignment in business strategy. Learn how to unite leadership, middle management, and ground-level employees for effective problem-solving and value creation. Transform your business today. You can meet with the CEO of your company and craft an awesome strategy for taking your business to the next level, but if you do not have the agreement of middle management, you have nothing. You can meet with countless people on the ground who know the real problems, but if you can’t quantify those problems in a way that their leadership can understand, you have nothing.
You can read the greatest business book that lays out a superior path forward through industry best practices, but if you can’t connect the solutions it promises to problems that are clear within your business, you have nothing. You can craft a strategy with a respected leader in your organization to bring an initiative forward, but if her peers don’t see the value in the initiative, you have nothing. You can even get an entire development organization rallied around a particular initiative, but if the sales organization doesn’t see that as adding value to its goals and the company’s profitability, you have nothing. Value isn’t found in leadership, books, best practices, departmental unity, or in those who do the work. Value is found at the _intersection_ of those things, where everyone sees the problem in the same way and will do their part in solving it. My life has been transformed by acknowledging the immense _power of alignment_. --- # Three Essential Components to Compliance at Velocity in the Enterprise URL: https://hedge-ops.com/posts/three-essential-components-to-compliance-at-velocity-in-the-enteprise/ Discover the three essential components to achieving compliance at velocity in the enterprise: focusing on the workflow, making it real, and empowering security. Learn from a Chef initiative. Security has been the most difficult part of [implementing Chef](/posts/intrinsic-motivators-leading-to-chef) in my large organization. I recently spoke with Chef about this and had a great conversation with [Justin Arbuckle](https://twitter.com/dromologue) related to it. Chef is focusing this year on helping organizations like mine to achieve compliance at velocity.
Through the conversation and Justin’s great advice, I realized that every Chef initiative must have these three elements to be successful: ## Focus on the Workflow At first, I was focused on the technology and what talked to what, which commands would be used, and how awesome the outcome would be for our business. From a security perspective, however, this was worthless. Security and compliance are focused on _how we can safely make changes to this system_. This means that you don’t accidentally bring production down with a cookbook change. It also means that you get approvals within a defined process before making _any_ change. For us, this workflow didn’t really take shape until we decided to fully adopt [the Policyfile feature](/posts/policyfiles) and workflow for change management. We then wrote extensive documentation and Visio diagrams to explain every element of every step in the journey from a check-in to a production change. It wasn’t until we had this documented and clear that we started making progress with our security team. The lesson we learned was: _the technology is secondary to the workflow_. The workflow is most important. If you’re security conscious and haven’t looked at Policies yet, you really need to. ## Make it Real Looking back at the last few months of our implementation, we’ve spent way too much time in Visio and not enough time creating a real environment in order to demonstrate the changes we’re talking about. I spent quite a lot of time trying to consolidate the Chef ecosystem into something that someone could understand in an hour-long meeting, but that was ineffective. It turns out that (1) Chef is complicated and hard, which is why it’s so powerful, and (2) people don’t generally have time to wrap their minds around it like I have.
Knowing what I know today, I would have started by creating an environment that demonstrated what I was talking about and then shown every stakeholder the workflow (defined above) applied to a real work situation that I could control. This is what we have done: we migrated YouTrack management to Chef and will demonstrate a secure, repeatable workflow to our security stakeholders with that. This will shift the conversation away from the abstract and into the implemented. It also means there are no unknowns to implementing the solution. ## Empower Security Security people are used to hearing from people, “We want to do this cool thing that will make _our_ lives easier but will make _your_ lives more difficult.” It’s natural for them to approach Chef in the same way. Fortunately, Chef has made some amazing investments lately in features that make it a partner to security rather than an impediment. The [audit mode features](https://www.chef.io/blog/2015/04/09/chef-audit-mode-cis-benchmarks/) recently released in Chef allow a security team to map the auditor’s view of security compliance into actionable requirements that can then be applied to the system. So, all of a sudden, the crazy devops person who wants to make everything go faster is the person who will enable automated, reported compliance for PCI throughout our data center. The posture of the security group changes from antagonistic to a true partnership. We’re planning on taking some PCI requirements and writing audit cookbooks for them. We’ll go into the auditing relationship with demonstrable proof that we are creating a more secure, auditable, and fast system for managing configuration in our hosted environment. ## Conclusion Empathy is probably the most important aspect of any change. Begin with how a change will improve the effectiveness of your colleagues and the ultimate profitability of your company. Security is no different.
Thanks to my friends at Chef, I have a more solid strategy for meeting those goals. --- # Discriminatory Wind URL: https://hedge-ops.com/posts/discriminatory-wind/ Explore the metaphor of discriminatory winds in the tech industry, highlighting the challenges faced by women and minorities. Learn how to support and respect their tenacity and courage. The other day I was [riding my bike](/posts/engineering-travel) to work, felt great, and got there in 26 minutes. I usually take about 35 minutes to get there. My wife even noticed via [the automated location texts](/posts/sanitize-your-smartphone-with-republic-wireless) that my phone sends her (“Michael is leaving home,” “Michael is arriving at work”). She sent me a text congratulating me on the great accomplishment. I even congratulated myself a bit on how regularly I bike now and how I’ve gotten so much better at this over the last year. At the end of the day, I started home. The wind was screaming in my face, and I could barely go over 10 MPH. On the way home, I saw two bikers going in the opposite direction, cruising along happily. They looked at me with pity, thinking I was out of shape. I wanted to yell out, through the wind, “You don’t realize how good you have it!!!” Almost an hour later I was home. I was tired and beaten. When the wind is at your back, you go faster. You don’t realize it’s the wind that is pushing you; you think it’s you doing it. When the wind is in your face, you are constantly reminded of the challenge, and you constantly have to push. I have met many women in my professional network who have had the wind in their face and pushed through anyway and built successful careers. They meet countless people along the way who think they are better suited for more people-oriented professions like sales. At a recent conference, a woman in technology was checking out a product at a sales booth.
The person at the booth immediately shifted to a gentler, less threatening voice and remarked how surprised he was to hear that she wasn’t in sales herself. At other times women run into people who mindlessly delegate the meeting notes or the administrative tasks to them, because…there is no good reason. Minorities within technology often have the wind in their face and push through. I was horrified to overhear a white man at an event start talking with an inner-city accent to an African American man about how he was going to go _all ghetto_ on some technology and make it work, presumably, by not taking digital prisoners. The African American man politely acknowledged the awesomeness of this endeavor and quickly exited the conversation. I thought to myself, how many of these types of conversations does this person have to endure in places like this? If you’re a white heterosexual male in technology, the wind is at your back. You may not notice it, but it is. What can we do for the others? My rule is that I will always treat others as people, never as labels, and keep my interest and conversation toward what can make them successful. I never bring up their status; they have enough awkward examples of that without me adding to it. But I also make an effort, in whatever way I can, to support, encourage, and respect those who have exhibited much more tenacity and courage than I have. --- # My Advice For Chef in Large Corporations URL: https://hedge-ops.com/posts/my-advice-for-chef-in-large-corporations/ Explore practical advice for people in large corporations on how to effectively use configuration management. Learn from real-life examples and implement simple strategies to transform your organization’s ability to react to change.
Here’s my simple advice about [Chef](/posts/intrinsic-motivators-leading-to-chef) I wish I had heard a year ago: All the stories about [the unicorns, rainbows, and fairies](http://www.itskeptic.org/content/devops-unicorns-horses-and-mules) that are doing absolutely amazing things with configuration automation are extremely inspirational. [Read about them](/posts/customizing-chef-book-review). Learn about them. Enjoy their talks. Enjoy their hipster vibe. Tell yourself that you are going to be cool like that one day. And then forget everything they are talking about. Because what they are doing is likely too advanced for what you’re trying to do, because you’re not five years or more into your infrastructure automation initiative. Do this instead: Create these four nodes in your Data Center, behind firewalls, with no outside connectivity whatsoever: 1. A Chef Server 2. A Chef Client with the [ChefDK](https://downloads.chef.io/chef-dk/) installed on it 3. A Chef Analytics Server 4. An Artifacts Server (like an SFTP server) Does your security team not allow connectivity between Production and UAT (User Acceptance Testing)? Awesome! Build two environments! Does your security team segment audited environments from non-audited environments? Awesome! Build the above four servers in _every segmented environment you have_. You heard that right. Now isn’t the time to get into pissing matches about your _new devops vision of greatness_ that will totally transform…_everything_! No, now is the time to automate the things. Set up your servers and make it happen. If this becomes political, then you are doing it wrong. _But Michael, how am I going to maintain all those environments?_ Well, thankfully you have the joy and pleasure of (1) probably having a bad system in place, which is why you are looking at Chef, and (2) Policyfiles. So get over your perfectionism and implement this easy workflow for change management: 1.
[Use a policyfile](https://docs.chef.io/config_rb_policyfile.html) for every node in your infrastructure 2. Commit policyfile changes to Git, with each team keeping its policyfiles in their own git repository, separate from their cookbooks 3. Use your CI [to automatically generate](https://docs.chef.io/ctl_chef.html#chef-install) your policyfile.lock.json files and check them into Git. 4. Use your CI to [package each policy into a file](https://docs.chef.io/ctl_chef.html#chef-export) with the [`chef export`](https://docs.chef.io/ctl_chef.html#chef-export) command. The archive contains all cookbooks, the policy, everything. 5. [Get your updated policy archives to your Data Center](http://lmgtfy.com/?q=how+to+transfer+a+file+from+one+place+to+another). You should be good at this. You do this already. 6. [Activate your archives](https://docs.chef.io/ctl_chef.html#chef-push-archive) on the Chef Server for the appropriate policy group with the [`chef push-archive`](https://docs.chef.io/ctl_chef.html#chef-push-archive) command. It’s as easy as that. Whether you have one Chef server or a hundred, you have those same six steps. You can save the absolutely mind-blowing automation of step #5 and the simplification of everything for later. That’s not the most important thing. Here’s what’s most important: an application team deploys an upgrade with zero outages and zero problems. Then they brag to their leadership about it because it never went this smoothly when they did it the old way. Notice that nobody cared about a stupid security argument about what ports are open between environments (there are none in the above proposal) or trying to be [like Etsy](https://codeascraft.com/) or Netflix. People saw the zero outages and zero problems and said to themselves, “Holy Shit This Is Real”. Multiply the _Holy Shit This Is Real_ moments. That’s what you’re trying to accomplish. Not a dream state. Not what a book said.
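The six-step workflow above can be sketched as one CI script. This is a hypothetical sketch, not from the post: the repository layout, archive paths, and host names are invented, though `chef install`, `chef export`, and `chef push-archive` are the real ChefDK commands the steps link to.

```sh
#!/bin/sh
# Hypothetical CI sketch of the six-step Policyfile workflow above.
# Assumes a repo containing Policyfile.rb; the "production" policy group
# and the artifacts host are made-up names.
set -e

chef install Policyfile.rb                 # step 3: (re)generate Policyfile.lock.json
git add Policyfile.lock.json
git commit -m "Update Policyfile lock"     # step 3: check the lock into Git

chef export -a Policyfile.rb ./artifacts   # step 4: archive the policy plus all cookbooks

# step 5: move the archive however you already move files in your data center
scp ./artifacts/*.tgz artifacts-server:/drop/

# step 6: from a node that can reach the Chef Server, activate the archive
# for a policy group, e.g.:
#   chef push-archive production /drop/<policy-archive>.tgz
```

Each segmented environment gets its own copy of this pipeline pointed at its own Chef Server, which is what keeps the proposal free of ports open between environments.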
You’re fundamentally transforming your organization’s ability to react to change, and that capability will be an absolute game changer. So get out of the politics, get out of the arguments, document and implement the simple strategy above, and watch perceptions of what is possible rapidly change. --- # Intrinsic Motivators Leading to Chef URL: https://hedge-ops.com/posts/intrinsic-motivators-leading-to-chef/ Explore how intrinsic motivators such as autonomy, mastery, and purpose can lead to successful implementation of Chef in an organization. Discover the power of these motivators over traditional bonuses. I’m reading about culture in [Lean Enterprise](http://amzn.to/1LfPSL8), and the author makes the point that [bonuses aren’t the most effective means of motivating employees](https://www.youtube.com/watch?v=u6XAPnuFjJc): > While extrinsic motivators such as bonuses are effective in…mechanical work, they actually _reduce_ performance in the context of knowledge work. People involved in non-routine work are motivated by intrinsic factors summarized by Dan Pink as: 1. _Autonomy:_ the desire to direct our own lives; 2. _Mastery:_ the urge to get better and better at something that matters; 3. _Purpose:_ the yearning to do what we do in the service of something larger than ourselves. I think this does a really great job of describing my intrinsic motivators for [rolling out Chef](/posts/learning-chef-book-review) in our organization. Yes, I’d love to be compensated well for doing what we are doing and would never argue to the contrary. I’ve seen in the past, though, that money is just money, and there are things that matter to me as much as or more than money. Daniel Pink really hits the nail on the head about what those are: 1. _Autonomy:_ providing this capability to my company will create more of an ability to direct my own path in the future.
The more value I help create, the more I can be in control of how I express that value, and the more freedom I’ll have, within the context of a team, to solve problems that interest me. 2. _Mastery:_ here is something that I can master: how to automate infrastructure configuration management through code using Chef. This is something that can scale quite large, and I have the ability to become one of the few people in the organization with a full handle on it. That’s exciting to me! I don’t want to be mediocre or have a skill that everyone else views as a commodity. 3. _Purpose:_ this is the biggest intrinsic motivator for what I’m doing. People who do configuration management today have chaotic lives and regularly stay up all hours of the night to perform their duties manually. I get to change that! Our customers don’t yet have the uptime and consistency that they expect and deserve. I get to help change that and create a game-changing strength for our organization compared to our competitors. I’m excited about this journey that I’m on. The motivators are far more intrinsic than extrinsic. I’ve discovered that this is why they are so powerful. --- # Kanban Prioritization with Cost of Delay URL: https://hedge-ops.com/posts/kanban-prioritization-with-cost-of-delay/ Learn how to prioritize your Kanban project using the cost of delay method. Avoid unhealthy competition and out-of-touch strategies by focusing on ROI and lifecycle profit. We have established an [input queue](/posts/defining-the-kanban-input-queue) and defined [the one metric that matters](/posts/the-one-metric-that-matters) for our Kanban project. Our standups are [more focused than ever before](/posts/kanban-standup-meetings-a-way-out-of-standup-hell). Now we need to focus on how to prioritize items that go into our input queue. [Lean Enterprise](http://amzn.to/1LfPSL8) outlines an interesting way of doing this: prioritize items by their cost of delay.
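Before looking at how teams usually prioritize, it helps to see what a cost of delay calculation can look like in practice. One common formulation from the lean literature is CD3 (cost of delay divided by duration): urgency per unit of effort. The sketch below is purely illustrative — the option names echo the feature choices discussed later in this post, but every dollar figure and duration is invented.

```python
# Hypothetical CD3 (Cost of Delay Divided by Duration) sketch.
# All cost-of-delay figures and durations are invented for illustration.

def cd3(cost_of_delay_per_week: float, duration_weeks: float) -> float:
    """CD3 score: value lost per week of delay, per week of effort."""
    return cost_of_delay_per_week / duration_weeks

# (name, estimated cost of delay in $/week, estimated duration in weeks)
options = [
    ("Large new feature", 20_000, 12),
    ("Reduce defect rate", 8_000, 3),
    ("Add undo feature", 5_000, 4),
]

# Highest CD3 first: the work whose delay hurts the key metric most
# relative to the effort it takes.
ranked = sorted(options, key=lambda o: cd3(o[1], o[2]), reverse=True)
for name, cod, weeks in ranked:
    print(f"{name}: CD3 = {cd3(cod, weeks):,.0f}")
```

Under these made-up numbers, the smaller defect-rate work outranks the big feature — exactly the kind of non-obvious result a cost of delay comparison is meant to surface.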
On an immature product, you might prioritize in order of who is screaming the loudest. This creates an unhealthy competition among stakeholders to see who can be the most dramatic when asking for a change. Slightly more mature projects might look to the HiPPO: the highest-paid person’s opinion. This can lead to a strategy that is out of touch with what customers want, because the highest-paid person usually talks only to other highly paid people and their direct subordinates. A functional Kanban project looks to return on investment. How much money will we get from this endeavor, and how quickly will we pay off the cost to create the change? This is great, but the problem comes about in software when you have a lot of options on the table that would have a healthy return on investment. _What then?_ This is when the cost of delay calculation comes in handy. From _Lean Enterprise:_ > To use Cost of Delay, we begin by deciding on the metric we are trying to optimize across our value stream. For organizations engaged in product development, this is typically lifecycle profit…When presented with a decision, we look at all the pieces of work affected by that decision and calculate how to maximize our One Metric That Matters given the various options we have. For this, we have to work out, for each piece of work, what happens to our key metric when we delay that work (hence, _cost of delay_). Tomorrow morning I’m meeting with stakeholders for one of my projects and will prioritize tasks with this in mind. The project is focused on adding quality to our products, so the one metric that matters for this project is the rate of adoption of new features by our customers.
With that in mind we’ll have a few options: - we can invest in a large, new feature that we have evidence is preventing higher adoption - we can work on decreasing the defect rate of the automation effort itself, as there are currently more defects being reported than being fixed within the automation product - we can enhance the automation application by adding an undo feature, which will make people more efficient at creating automated scripts These are all important feature requests. I can name individuals who would each choose a different answer as to which is most important. When each option is evaluated through the cost of delay paradigm, however, things become clear. Doing this exercise makes it obvious to me what our next priority should be. It will be interesting to see if the cost of delay method can be easily understood and adopted by others. We are early in our adoption of Kanban, so we are building this ship as we sail it. I suppose I’ll see soon enough. --- # Four Questions for Product Management URL: https://hedge-ops.com/posts/four-questions-for-product-management/ Explore four essential questions every product manager should ask for effective requirements analysis. Learn how to understand customer needs, define success, and measure it effectively. As product managers, how do we arrive at delighting customers?
There are [organizational](/posts/mission-command) and [tactical](/posts/is-continuous-delivery-needed-in-our-organization) lessons that I’ve learned through the [Lean Enterprise](http://amzn.to/1utrIYL) book, like finding the [one metric that matters](/posts/the-one-metric-that-matters). These flow into team-level initiatives that I’ve learned through the [Kanban book](http://amzn.to/1CcuYsg), like [defining an input queue](/posts/defining-the-kanban-input-queue) and [structuring your standups in a better way](/posts/kanban-standup-meetings-a-way-out-of-standup-hell). But I believe the fundamental change agent lies in how a product manager approaches requirements analysis. I believe requirements analysis boils down to four fundamental questions. The maturity of the team depends on which of these questions are being asked. They are listed here in order from lower maturity to higher maturity, but they are all essential. ## What do you want to do? It’s essential to really understand what people want and make sure you understand exactly what’s being asked. I’ve been in the situation multiple times when a change was requested, but when I asked probing questions about the exact desired behavior, the request fell apart. It’s also important to understand what’s being asked for so we don’t deliver the wrong thing with the same name. It’s a terrible feeling to declare victory and celebrate with the development team, only to realize that the requester is not equally celebratory because you delivered the wrong thing. This is the foundational question. You have to understand. ## Why do you want to do that? This is the question many people don’t get to. The thought is that a customer knows what they want, and the why is their business. I’ve learned over the years that this can be dangerous. If we don’t understand _why_ they want what they are asking for, then how can we know we are delivering the right thing?
I’ve seen time and time again the scenario I like to call _Declare Victory Yet Nothing Changes_. This is where you do what was asked for, but the outcome everyone was looking for doesn’t materialize. Yet it’s easy to declare victory in that situation, since you did indeed do what was asked. Without asking this second question, you’ll be unable to empathize with your customer. ## How will this make you more successful? This is a difficult question to ask without sounding condescending, because sometimes the answer seems obvious. The crux of this question is: how do you define success? The requester should be able to explain that and, in addition, tie the request to that definition of success. This makes everything clearer. The customer might say something like, “I want to speed up service to increase revenue,” and want a feature to enable that. That’s where we want to be. We want to have a clear definition of our goals. ## How can we measure success? This is the tough part, but it’s a key element I’ve learned from the lean literature I’ve read recently. If you can’t measure something, you can’t manage it. We need to define success and create a measurement that will get us close to that. We then tie our product development strategy to the measurement of success and iteratively create new capabilities that are shown to influence our success metric. This is the essence of the Lean Startup Cycle. I’ve only recently discovered the latter two questions, and they have made a world of difference. You are much more efficient in achieving your goals when you have defined shared success and have found a way to measure it. --- # How to Apply Kanban to a Large Project with High Feature Variability URL: https://hedge-ops.com/posts/how-to-apply-kanban-to-a-large-project-with-high-feature-variability/ Learn how to apply Kanban to large projects with high feature variability.
Discover techniques to manage throughput and break down tasks into Minimal Marketable Features or Epics. I was introducing some ideas I’ve learned recently about [throughput management](/posts/initial-tracked-metrics-for-kanban-adoption) to a friend of mine who is on a large project. The question came up: how do you make throughput a useful metric when some very small features go through the system while others can take months with a large team? Let’s review a bit: throughput is the measurement of the number of items you process through a system over a period of time. So you would say that you did fifteen features and bugs last month, and ten the month before. If you had no control or process in place to deal with the fact that a new customer might want its killer feature, which is months of work for you, then this metric would quickly become meaningless. What to do? There is a simple way to handle this and a complicated way. I suppose the complicated way will be what’s needed for my friend’s project, but the simple way is good enough for my project. ## The Simple Way: Break Things Down into MMFs On a simpler project, you can simply break a large request down into a [Minimal Marketable Feature](http://www.netobjectives.com/minimum-marketable-features-mmfs-explained). The team asks: what is the minimal value we can deliver to the customer in a way that they both understand it and accept it as adding value to them? It’s not something like _the database column is added to the database_, but it’s also not _your killer feature is delivered_. When you break things down into outcomes that the customer cares about, you end up with a lot of smaller issues, and variability is much smaller. This is what I would try first. But what if that doesn’t work? What if the customer doesn’t care about your breakdown and just wants the feature?
## The Complicated Way: Break Epics into Stories The more complicated way, which I got from [the Kanban book](http://amzn.to/1GgXlcU), is to continue to allow the customer to define things _their_ way. Those items, when they are too big to break down, are called Epics and won’t be counted as throughput. The Epics are broken down into stories that are still testable outcomes. In other words, we are still avoiding the _database column is added_ story. Your throughput metric will track the number of stories that are processed through the system. I’ve seen mixed success from teams that are trying to break things down. People tend to want to break things down into layers and not into testable outcomes for end users. Once that skill is mastered, however, the team has a good way to track throughput through the system. ## But what about Story Points? Another way to track throughput is through story points. I don’t like this. I’ve read evidence that when story-point velocity is managed upward, the team simply begins to inflate its story-point estimates. We want to manage elements of the system that cannot be doctored, either subconsciously or consciously. Also, the Kanban book adds to the disdain for story-point estimates by bemoaning the waste associated with getting an entire team together for the estimation. So the exercise of measuring throughput starts with ensuring that items flowing through the system are of a predictable size. Either approach above would work, even for my friend on the large, established project. --- # Initial Tracked Metrics for Kanban Adoption URL: https://hedge-ops.com/posts/initial-tracked-metrics-for-kanban-adoption/ Explore the initial metrics to track for successful Kanban adoption. Learn about cumulative flow, lead time, throughput, and initial quality. Improve project management with these actionable insights.
One of the reasons I’ve read through [David Anderson’s](http://www.djaa.com/) [Kanban book](http://amzn.to/1ywImb4) is the need for metrics. I was inspired by [Lean Enterprise](http://amzn.to/1CEMvHL) to [become more metric-driven](/posts/the-one-metric-that-matters) and [make measurement](/posts/measure-for-reality) more of the foundation of my management approach. Anderson did not disappoint. He devotes a whole chapter to which metrics to track on an [initial Kanban initiative](/posts/kanban-decoupling-input-cadence-from-delivery-cadence). ## Cumulative Flow Kanban is focused on limiting work in progress to create flow, so it’s only natural to create a cumulative flow diagram. This should tell us a lot about the nature of flow in the project. Here is one from a recent release from [YouTrack](https://www.jetbrains.com/youtrack/) that I only looked at today: ![Cumulative Flow Diagram in YouTrack](https://ik.imagekit.io/hedgeops/site/article_images/2015-03-16-initial-tracked-metrics-for-kanban-adoption/cumulative.png) You can see that we had a lot in _Ready for QA_ and nothing flowing to _Done_. But then at the end of the release, many of them drop off. Did they go to the next release? Why? Why weren’t they finished? The number of features in Developing stayed constant, so it looks like that isn’t a problem. Analysis is bunched up at the start of the release but isn’t occurring toward the end. Is this a problem? So many actionable ideas come out of viewing this chart. From now on, this chart will be at the forefront of any metrics my teams provide. ## Lead Time In a Kanban system, lead time is important because it is the basis of actions in the system. If the lead time is 20 days, you’re asking your stakeholders the question, “What X items will you want 20 days from now?” Also, when determining whether something needs to be rushed through a system, we _have_ to know lead time.
In other words, if a stakeholder needs something in 30 days, is it possible? Without a lead time metric, that’s not known. Anderson stresses the importance of lead time distribution over mean lead time, because it will help the team understand the certainty with which commitments can be made. I wasn’t able to generate lead time from YouTrack, but I created a mock one in Excel fairly easily: ![Lead Time Distribution](https://ik.imagekit.io/hedgeops/site/article_images/2015-03-16-initial-tracked-metrics-for-kanban-adoption/lead-time-distribution.png) This tells us that we can easily promise a lead time of 10 days on the project. Many items will go much faster than that. ## Throughput Throughput is the measurement of how many items go through the system over a fixed period of time, usually months. Ideally throughput should be high. Here is the throughput on one of my projects from the last part of 2014: ![Throughput](https://ik.imagekit.io/hedgeops/site/article_images/2015-03-16-initial-tracked-metrics-for-kanban-adoption/throughput.png) This tells us that there was a huge spike of productivity in October. Why was that? To be honest, it was because there was an important deadline to meet and I worked overtime on the project to get it done. Another observation from this data is that throughput is by no means consistent. Why is this? It can probably be seen in the cumulative flow diagram above if we view it for all months. Continuous improvement goals should be to increase throughput. ## Initial Quality We want to make sure we don’t motivate the team to speed up without regard for quality issues they are creating. So we need quality to be one of our core metrics. Anderson talks about the metric being defects per feature, but I disagree. I want to just track straight defects that are found by customers: ![Defects Found](https://ik.imagekit.io/hedgeops/site/article_images/2015-03-16-initial-tracked-metrics-for-kanban-adoption/defects-found.png) Wow, October was busy! 
This makes me wonder whether all that productivity was worth it. November and December trended downward (but there was vacation in those months as well). A good metric is one that initiates action, and, as you can see here, these metrics are a great start to seeing the health of a project and ideas for improvement. They will be the basis of project management going forward. --- # Kanban Decoupling Input Cadence from Delivery Cadence URL: https://hedge-ops.com/posts/kanban-decoupling-input-cadence-from-delivery-cadence/ Explore how Kanban decouples input cadence from delivery cadence in software development. Understand how this approach can improve team focus, streamline work processes, and enhance product management strategies. For my entire career, I have approached software development project planning at the level of the release. In waterfall, you plan a six-month release, the first phase of which is to design and estimate the requested features to determine how much can go into the release. You are supposed to plan the whole thing. In Scrum, you plan a three-week release up front. The cadence is shorter, but the process is very similar. [David Anderson’s](http://www.djaa.com/) [Kanban book](http://amzn.to/1yaDiHw) provides another approach that separates [the input process](/posts/defining-the-kanban-input-queue) from the output process. In Kanban, the input queue is largely there to serve the development machine. You want to have as many items in the queue as are needed to not have anyone waiting for new work. On a typical small team, that is probably five items. A product manager would manage this queue by creating a regular meeting for all stakeholders to collectively decide what needs to happen next. Anderson recommends that the meeting happen once a week. If the team is processing work at an agile pace, this should be enough to refill the most important 2–3 items. It’s easy to think that this means that there should be weekly releases. 
When you step back and think about it, the release cycle has its own set of constraints. Remember that the input queue’s function is to provide the development team with the most valuable items to do next. The function of the delivery is to provide that value to the customers while minimizing delivery costs. Let’s say delivery of the software means that thousands of people need to be trained, materials need to be printed, and a marketing program needs to kick off. In this situation, the fixed costs of the delivery are high, and thus it is desirable that they not happen as frequently. Can you imagine doing such a release every two weeks? That would be insane! On the other hand, delivery of the software might be very cheap because of tools [like Chef](/posts/learning-chef-book-review) and training is built into the product. In this case, it makes sense to release more often. Perhaps a daily release would be a great idea for this type of team. A part of the lean movement focuses on taking a situation like the former one and turning it into the latter one. Lowering the fixed costs of release means that value can flow more freely to customers, and ROI happens sooner. But that’s a strategic choice. At the beginning, you get the release cycle you get, and continuously improve to a better one. But because we have decoupled the input side of the equation, we get a team that is focused, flowing the highest-priority work quickly through the system. I think that this will be a game-changer for how my teams do product management going forward. --- # Learning from Ebola Healthcare Workers with Enterprise Problem Solving URL: https://hedge-ops.com/posts/learning-from-ebola-healthcare-workers-with-enterprise-problem-solving/ Explore how enterprise problem solving can learn from Ebola healthcare workers. Discover how hypotheses, testing, and measuring results can lead to meaningful change in large organizations. 
In a large enterprise it can be difficult to implement large, meaningful change. On many days I have ended up frustrated while sitting down to a margarita during [one of my quarterly retrospectives](/posts/measure-for-reality). How do I get through all the opinions and politics to create real, lasting change? After reading about [the Lean Startup Cycle](/posts/the-lean-startup-cycle), I have a new way of thinking about it, which starts with healthcare workers in West Africa [fighting Ebola](http://www.economist.com/news/international/21625813-ebola-epidemic-west-africa-poses-catastrophic-threat-region-and-could-yet). When these brave individuals arrive to risk their lives and help others, they are met with a striking contrast to the first world. As you have probably learned, there are entire tribes of people in West Africa who celebrate the recently deceased within an elaborate ceremony where the entire tribe drinks from a cup shared with the deceased loved one. Science certainly had nothing to do with this ritual, but science does tell us that when the deceased person has Ebola, this is a surefire way of getting the whole village infected. Couple this with the cultural norm that those who are sick should travel large distances to medicine men who will heal them, and you have an epidemic. So, faced with such a terrible situation of men, women, and children dying every day due to a horrible disease, what do I imagine is the reaction of these healthcare workers? Do they pound their fists and whine, “We could change this situation if these people weren’t being so _stupid_?” Do they clock in and out, thinking that the problem is just too large and that they will just collect a paycheck so that they can support their family? In other words, is their primary approach to the situation that of _frustration_? I think they certainly feel frustration. 
However, these people are professional scientists, and they probably follow [a much different formula](http://en.wikipedia.org/wiki/Scientific_method): First, they start with a hypothesis. They ask themselves, maybe if we visit the tribal leaders we’ll be able to convince them that this practice is dangerous, and that will stop the spread of this disease. If the leaders cannot be convinced, then they may say that the tribes are a lost cause and that maybe they should lock down the entrances to the cities. I’m sure there are hundreds of ideas they come up with, all without despair, but with a genuine hope that something can be done to improve the situation. Next, they test the most promising hypothesis. The team may decide to educate the tribal leaders. They then create a short-term way to test their hypothesis. They decide to go to one village, talk to the tribal leaders, then pay an informant to record the activities of the next funeral, which is likely to happen in the coming days. They try this with ten villages. Finally, they measure the results of the test. Out of the ten villages, only two changed their practices. While many of us might view this as a failure, the scientists see this as success. We now know what _wouldn’t_ be helpful. Now we can come up with a better hypothesis and start the process all over again. This method is what got us technology, medicine, and progress. Why aren’t we using it in the enterprise? --- # Kanban Standup Meetings: A Way Out of Standup Hell? URL: https://hedge-ops.com/posts/kanban-standup-meetings-a-way-out-of-standup-hell/ Escape the chaos of daily standup meetings with the Kanban method. Learn how to facilitate effective communication and collective ownership in Agile projects. Turn your standup meetings into a productive group exercise, not a task reporting session. In every Agile project, you’re supposed to have a daily standup meeting to facilitate communication and collective ownership. 
Intentions are always great at the beginning, but for me, they have always descended into a tolerable mess. Can the [Kanban method](/posts/defining-the-kanban-input-queue) teach us anything about how to do them better? [If you’re following the Scrum process](http://www.mountaingoatsoftware.com/agile/scrum/daily-scrum), the meeting should last 10–15 minutes and everyone should go around the room talking about what they accomplished yesterday, what they plan on doing today, and what, if anything, is blocking them. Every software development methodology I have read tells you to do them; I’ve even seen people have success with them on waterfall projects. Everyone is excited about doing the standups correctly, and then someone gets tired and [everyone sits down](http://www.batimes.com/articles/seven-common-mistakes-with-the-daily-stand-up-meeting.html). Eventually what tends to happen on my team is that everyone reports to _me_, the leader, what they intend on doing today and I give them about 10–30 seconds of comment to help them along. While I’m engaging with each person, the others are thinking about other things. There is no collective ownership. This is by no means a functional standup meeting. What to do about this? [David Anderson](http://www.djaa.com/) has some suggestions in his [Kanban book](http://amzn.to/1yaebV5): > The need to go around the room and ask the three questions is obviated by the card wall. The wall contains all the > information about who is working on what. Attendees who come regularly can see what has changed since yesterday and > whether something is blocked or is not visually evident. So standups take a different format with a Kanban system. > The focus is on the flow of work. I’ve done these types before, and it is very effective. The question is now, _What needs to happen today to move things forward?_ and everyone participates. This becomes an obvious group exercise, not a task reporting meeting to management. 
So in the new standup, you start with the board on the right side and talk about every card. The team collectively identifies the actions taken that day to move it forward. Items that are blocked are highlighted and the team plans a course of action. While it is driven by a leader, the person driving it can change and everyone is engaged. Also, this way of doing standups scales well. Anderson writes: > Daniel Vacanti ran a successful standup with more than 50 people at a project at > Corbis in 2007 where, despite the large size of the team, the meeting was > completed in around 10 minutes every morning. A ten-minute meeting with fifty people; that’s amazing! I’m looking forward to getting my standups out of standup hell. --- # Defining the Kanban Input Queue URL: https://hedge-ops.com/posts/defining-the-kanban-input-queue/ Explore the concept of the Kanban input queue and how it can revolutionize your backlog management and prioritization. Learn from David Anderson’s innovative approach at Corbis and how it can be applied to your projects. I have been reading [David Anderson’s](http://www.djaa.com/) wonderful book on [Kanban](http://amzn.to/14OSLBa) this week as a means to get more specific on the project improvements I want to make based [on what I’m learning](/posts/the-one-metric-that-matters) with [Lean Enterprise](http://amzn.to/1y9Xjhh). This book has disrupted my approach to backlog management and prioritization. Within a Scrum or Waterfall process, whenever a customer asks for a request, you put it on a list and regularly prioritize that list. The backlog as a whole is the input queue in the system. Currently, there are 397 issues on our backlog. We can’t possibly be meaningfully prioritizing all of these. In a Kanban system, this is seen as waste. Why spend all this time prioritizing something when only the top five things at any one time are important? Is there a way to communicate to users that we just won’t get around to certain things? 
At [Corbis](http://www.corbisimages.com/), Anderson tried something different: he figured out how many items were needed in the input queue to keep the system going. In other words, we don’t want to be caught not knowing what to do next, so what number of items in the input queue would keep that from ever happening? Usually the number is less than five. Every week the team meets with the stakeholders and asks the simple question, “What are the most important X things to do next?” These items can be pulled off of the backlog, or they could even be new. The stakeholders can discuss what the most important changes are and why. The important items are determined and then the changes flow through the system. Now that this discussion is happening regularly, the territorial fighting should decrease. It’s up to those in the meeting to come to an agreement on what is next. If your thing isn’t done this week, then perhaps it will be done next week. Nothing is set in stone. After a few months of this, it should become apparent that some items on the backlog have very little chance of getting done. Therefore, if a backlog item is more than six months old, we should close it. We can always reopen it if it is a priority, but it keeps open communication with those requesting changes about whether to expect the change anytime soon. Yesterday in a project meeting one of our senior developers recommended that we focus more on ensuring buy-in for what we are doing from the teams we are serving. At the time I was focused on how to define appropriate metrics and so didn’t know how to implement her point. But now I see that if I follow this pattern of input queue management, I’ll be able to bring together stakeholders’ desire to have something _right now_ and their ability to ensure that no other teams are blocking us from creating that outcome. I’m really excited to see how this suggestion will work for us. --- # Is Continuous Delivery Needed in Our Organization? 
URL: https://hedge-ops.com/posts/is-continuous-delivery-needed-in-our-organization/ Explore the need for Continuous Delivery in your organization. Learn how it can help fix defects quickly and deliver features faster, building trust with your customers. Continuous Delivery sounds wonderful when [you’re at a conference](http://www.infoq.com/interviews/jez-humble-lean-enterprise). You hear about companies like [Netflix](http://www.infoq.com/presentations/netflix-continuous-delivery) that deploy to production many times per day. When [learning Chef](/posts/learning-chef-book-review), people often ask me if we really need something that will enable us to deploy that often. Some of them are on projects that take many months to deliver, and the customer would have it no other way. I answer this problem by splitting it up into two questions: How quickly does a customer want a Severity 1 defect fixed in production? I’d say the answer to this is usually, regardless of the tooling used, within a few hours. If there is a critical defect affecting operations, no one is talking about how we’ll have that delivered in a few months. People are on phones, developers are doing what it takes to get it done, and something happens. So I’d say in this situation it’s a great investment to automate your delivery so the emergency situation is as tested as the non-emergency situation. How quickly does a customer want a feature in production? This is a trickier question. We can separate the answer into what the customer _wants_ and what the customer _expects_. The customer _wants_ to have the feature in production right now. Otherwise, they wouldn’t have told you about it. I have never heard a user make a request for change in software and say, “I’m just letting you know, I’d rather have it six months from now.” Now is always better. However, our customers have a business to run, so they’re not going to be foolish with updates. 
They want us to [fully test and properly deliver the software](/posts/safety-net). So I believe their answer to this question would be: as quickly as you can safely get it to me. This is a flexible arrangement based on the trust we create from automating our delivery and testing process. The better job we do, the more they trust us and the quicker they get their software. It will probably never be _today_ that they get updates, but if we’re taking this seriously, it shouldn’t be a long time either. So even for us, with real customers that are paying us to get it right, there is room for continuous delivery and Chef. --- # The One Metric that Matters URL: https://hedge-ops.com/posts/the-one-metric-that-matters/ Discover the power of the One Metric That Matters (OMTM) in driving success and innovation in your career or business. Learn how to identify, implement, and measure your OMTM for maximum impact. The more I measure, [the more successful I am](/posts/measure-for-reality). I’ve known this for a while, but I realize that the lack of measurement is still the thing that is holding my career back. I’ve already written about how measurement is key to [The Lean Startup Cycle](/posts/the-lean-startup-cycle) of using the scientific method to find innovation in your organization. So I’m hooked on this idea, but I desperately want to implement it in a good way. I want to have a breakthrough. 
The _Lean Enterprise_ book cites the _Lean Analytics_ book concept of _The One Metric That Matters (OMTM)_: > OMTM is a single metric that we prioritize as the most important to drive decisions depending on the stage of our product lifecycle and our business model… We focus on One Metric that Matters to: > > - Answer the most pressing question we have by linking it to the assumptions in the hypothesis we want to test > - Create focus, conversation, and thought to identify problems and stimulate improvement > - Provide transparency and a shared understanding across the team and wider organization > - Support a culture of experimentation by basing it on rates or ratios, not averages or totals, relevant to our historical data set When I started out my current project in 2008, my boss for a very short time was [Jeff Hughes](http://www.linkedin.com/pub/jeff-hughes/3/720/3a3), who is a genius at innovating software. The project I was doing was related to software quality, so Jeff gave me a metric for the first year: make defect containment go up, as defined by the percentage of software defects found inside the company relative to the software defects found by our customers. He gave me the one metric that matters, and with that direction I was able to take the project where it needed to go. At first, we thought that we were going to stick with testing just customer situations, but we ended up having a mixture of customer situations and vanilla, or regression, situations. Without that _one metric that matters_, I wouldn’t have had the freedom to do that. Fast-forward to last year: I started a project to improve the installation experience for all of our products that get installed in restaurants. Everyone seemed to want this to be better, so I didn’t stop and create the one metric that matters. I rushed ahead and started on the solution. My one metric that mattered internally was lowered cycle time from a release to working software. 
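Jeff’s defect containment metric boils down to a single ratio. A quick Ruby sketch with made-up counts (the real numbers would come from your defect tracker):

```ruby
# Defect containment: the share of all defects caught internally
# before a customer ever sees them. These counts are invented.
internal_defects = 42 # found by our own testing
customer_defects = 8  # reported from the field

containment = internal_defects.to_f / (internal_defects + customer_defects)

puts format('Defect containment: %.1f%%', containment * 100)
```

Tracked release over release, a rising containment percentage is exactly the kind of OMTM described above: a ratio, not a raw total, that everyone can rally around.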
But I didn’t bother to define where we were before I started, and how my changes had improved the situation. This led to a lot of unnecessary drama. People would question _why_ we were taking a particular approach. People would question whether an improvement was warranted in the first place (even after agreeing that it was warranted before). It’s easy to point fingers and talk about how people don’t understand or care, but they do understand and they do care. I just didn’t take the time to create a metric that would measure the outcome I was trying to create. This is not a lesson [I will soon forget](/posts/failure-the-catalyst). I hope that I will be able to fundamentally transform my approach to include measurement as one of the key aspects. --- # Progression of Responsibility URL: https://hedge-ops.com/posts/progression-of-responsibility/ Explore the progression of responsibility in personal and professional growth. Learn how mastering lower-level decisions can lead to strategic success. Read more on our blog. While reading _[Lean Enterprise](http://amzn.to/1zxdulv)_, I’m coming up with [a lot of great ideas](/posts/mission-command) and [improvements for my organization](/posts/the-lean-startup-cycle). Much of the book so far has been about how to properly execute portfolio management within an enterprise to make sure that (1) you maximize ROI, and (2) you don’t manage your existing proven products, which have an investment horizon of this year or this quarter, the same way that you would manage innovation products that have a longer investment horizon. It’s a fascinating read. Something occurred to me though. In order to gain entry into the higher-level strategic decisions, one must first master the lower-level ones. Here’s a progression I can see, one I wish I had known about ten years ago: 1. _Personal Effectiveness:_ do you take on work in an efficient, organized way? 
Do you have a system for [getting things done](/posts/getting-things-done-action-plan), a system for communicating with others, for follow-up, for [what to do when things get difficult](/posts/failure-the-catalyst)? Do you know [when you’re over committing](/posts/two-questions-about-commitments)? 2. _Planning Effectiveness:_ do you understand why we are doing what we are doing on this project? Do you measure things in terms of how they will profit your company, or in some other, less meaningful way? Do you have a way to create value for your stakeholders quickly by strategically prioritizing certain things over others? Do you know what the short-term and long-term risks to the product are, and what can be done about them? 3. _Management Effectiveness:_ do you know [how to measure success](/posts/measure-for-reality)? Have you linked the measurement to profitability of the company? Are you able to iteratively create improvements to measurements in a fluid, changing situation? Are people motivated by your leadership, and do they understand the parameters that you’ve set out that measure your team’s success? 4. _Executive Effectiveness:_ are you able to create profitability for the various products in your portfolio in a way that respects their lifecycle? Are your established, older products increasing in quality and customer satisfaction? Have your newer innovative products succeeded in creating new markets, as measured by appropriate metrics? Has your team leadership received the appropriate amount of direction (not too detailed to constrain innovation and not too broad to create chaos)? Have you had innovation experiments that have failed in the past three months, that had minimal investment yet provided a valuable lesson for the leadership team? Right now I’m at #3. 
It’s fun reading about #4 in the book, but I do realize the trap of trying to live in a place that is beyond your current capability: if you live in a later stage than you’re at right now, you end up ignoring the key elements that are needed to progress to the next stage. So I can start thinking a lot about portfolio management or how we would manage innovative products differently than our established products. But if I get too caught up in that, I avoid the reality for me right now: I need to find a measurement of success that leads to profitability of my company for every project that I’m doing, be able to communicate that to others, and manage its improvement with a team. That will be my focus over the next year or so. If someone comes to me about how we need to do a better job planning yet doesn’t have an established workflow for managing their own work, I quickly get them focused on that. If someone wants to be a manager of a group but can’t see what that group is doing strategically and the trade-offs that exist in the group, I get them focused on that. You have to master your current responsibilities before you can progress to others. --- # The Phoenix Project Book Review URL: https://hedge-ops.com/posts/the-phoenix-project-book-review/ Explore my insights on the transformative power of ‘The Phoenix Project’ in my latest book review. Discover how this book can change your perspective on devops and your career. As I’ve looked into devops more and more over the past few months, the book [The Phoenix Project](http://amzn.to/1AinIdB) has come up over and over. I finally bought it when Matt Stratton at Chef basically insisted on it in [his very awesome reading list](http://www.mattstratton.com/tech/devops) to ramp yourself up on devops. I haven’t been into fiction very much, but over the summer I read [A Man in Full](http://amzn.to/1zx9aT7) as a means of integrating stoicism into my own philosophy. 
That book lit up my imagination and helped me absorb the stoic themes in a way that would be difficult had I just read an outline of stoicism. I was awakened to the reality of how fiction can transform your mind deeply by awakening all aspects of the mind during learning. So I was very excited to read the book. And the book did not disappoint in changing my outlook on my own career and what is possible for those around me. It taught me a few basic lessons that I believe will transform my behavior in the future: - _Have a respect for the system._ Up until I read this book I treated an inefficient system like it was garbage. Let’s get rid of the inefficiency! This is dumb! What I realized by reading the book is that in order for you to effectively and profitably change a system, you must have a respect for and understanding of why it is the way it is. If you don’t know why it is this way, you can realize that it is an inefficient system, but you will not effectively change it. - _[Measure, Measure, Measure](/posts/measure-for-reality)._ The main character in the book has a great respect for measurement. People say you can’t manage what you don’t measure. I think that can be taken too far, but there is a reality in it: if I can’t measure the reality of the system, it will be very difficult for me to (1) convince people that a more efficient change is needed, and (2) know that the changes I am making are having their desired effect. - _Take a Breath and Count to 2._ One lesson the book taught me comes from watching the main character interact with various antagonists who are obviously being reckless, dumb, and incendiary: he takes a breath and counts to 2, then responds. And when he responds, it’s with facts and an attitude of doing what’s best for everyone. I desperately want to exhibit this kind of tact and patience. I get so passionate about my ideas that I can forget to have patience, be calm, and move the ball forward. 
- _Find the Bottleneck Constraint._ In the book there is a legendary character named Brent. Brent can do everything. He can fix problems in seconds that everyone else has spent days trying to understand. He knows why this server is the way it is, and the answer lies in activities from 2002. Brent has everyone asking him for everything, and the business is on its knees due to Brent’s inability to clone himself into fifty other people. In the book, the main focus was on getting Brent isolated and his work properly documented, prioritized, and managed. Brent remained a hugely valuable member of the team, but they couldn’t grow until his workload was under control. And once Brent had his priorities under control, he was able to do some special things for the company. I really enjoyed The Phoenix Project and recommend it to anyone wanting to lead change in their organization using lean principles. It reaches the reader in a way that a nonfiction book can’t: you can feel the tension. You drop the F-bombs right there with the characters. You feel the desperation as the core concepts come to life. And therefore when you face similar situations, you have a whole new world of awesome manufacturing theory available to you. This book was one of the best software-related books I’ve ever read. If you want to be a leader, please get a copy and read it. Then invite me to lunch and let’s talk about it; maybe we can change the world together. --- # My Son’s Choice Between Negativity and Taking Action URL: https://hedge-ops.com/posts/my-sons-choice-between-negativity-and-taking-action/ Explore how my son and I navigate the choice between negativity and taking action. Learn about our unique ability to see problems others don’t and how we use it to lead. My son has felt negative about everything lately, which has created for him a vicious cycle of disdain and despair. 
[He is a lot like me](/posts/embrace-difficulty), so whenever we learn a lesson about him, it usually has something to do with [how I’m wired as well](/posts/failure-the-catalyst). He and I sat in bed a few nights ago and I walked him through this very basic method of how leaders have a choice to make: First, I talked about how he and I are gifted with the ability to think deeply about things, and thus we see things that other people don’t see. We have the responsibility of being able to see problems that other people don’t see. Something that bothers him is how his cub scout pack meetings spend too much time talking to the parents, and that leaves him feeling like the meeting isn’t his. When I asked one of his good friends about cub scouts, his friend said, “It’s good.” I’m cursed with seeing problems everywhere as well. I talked to a police officer one time who told me that every room he goes into, even if it’s inside a church, he automatically tries to ascertain who in that room is a potential threat. That man has a similar curse: he sees problems that other people don’t see. I then asked Samuel why he thought President Obama wanted to be President. “Because he wanted to be famous.” I told him how there are better ways of being famous. The truth is Mr. Obama became President Obama because he saw problems that others didn’t see and cared enough about them to get into public life. A leader always starts with a perception of something wrong in the world. If nothing was wrong, there would be no leaders. So it’s a given that a potential leader sees problems. It is our curse. The question now is: what are we going to do about it? There are two choices: A potential leader _could_ choose the path of negativity, disdain, and ultimately despair. This is the path of the critic, the internet troll, or the quitter. This is the path my son was on and one I’ve been on many times. 
I can see what’s wrong with the situation, and my response is to continue to criticize or believe that nothing will ever change and quit. It’s even easier to do this when _I’m the only one who sees the problems!_ Well of course you’re the only one who sees the problems; perhaps that’s because you’re the one who is supposed to do something about it! Or a potential leader _could_ choose the path of taking action, improvement, patience, and love. In other words a potential leader could choose the path of leadership. This is the path I encouraged Samuel to take. The question isn’t what problems do you see with cub scout pack meetings; the question is what are you going to do about those problems? When you see a problem with a group of people, the loving thing you can do is, within your power, help solve that problem with them! It’s not to despair and quit. Sometimes problems are so deep and impenetrable that there is nothing you can do about them. At that point the best thing to do _is_ quit. However, most times the problems are there and as a leader we can work to make them better. That’s the path of leadership, the path of people who are cursed with the ability to recognize problems that others don’t see and choose to take action to solve those problems. --- # Programming Ruby (Pickaxe) Book Review URL: https://hedge-ops.com/posts/programming-ruby-pickaxe-book-review/ Discover how the Programming Ruby (Pickaxe) book can help you master Ruby programming. This comprehensive guide demystifies Ruby, making it an essential tool in your coding arsenal. Perfect for beginners and experts alike. When I started [learning Chef](/posts/learning-chef-book-review) in earnest I realized quickly that my need to know what was happening was leading me to need to dive into a book on Ruby and figure out what all the magic I was seeing in Chef was really about. Chef has an amazing way of being usable for those who don’t know much Ruby, but I’m the curious type that just needs to know. 
I started out with [The Ruby Programming Language](http://amzn.to/13QZz1v) but found it to be too much of a reference work that stated facts about the language instead of walking the reader through the learning process. I was delighted to find that [Programming Ruby](http://ruby-doc.com/docs/ProgrammingRuby/) was exactly that walkthrough. I was able to get through the book in a few days. It starts you out with objects, which for Ruby is the right place to begin. As I’m teaching my son how to program, the concept of objects is very easy for him to pick up. People don’t need to start with primitives, then control flow, then objects in order to learn. Learning is less logically structured than that. People don’t think like computers. Each chapter in the book is about fifteen to twenty minutes of time investment and walks you through an example that you can easily code on your own. I find that when learning these things, I can’t just read it and know it. I need to _do_ something as well. This book did a great job of keeping me engaged with my Ruby interpreter as well as with my mind. In November, I set a goal to be working with Ruby every day by February. This book did a great job of making that goal possible. It demystified how Chef was doing its magic, but it has done so much more. It has opened me up to a whole new world of possibilities: quickly scripting a solution to a problem without having to jump through all the hoops of a statically typed programming language. While I still love C# and will use it for certain problems, Ruby is now a part of my life, thanks partly to this excellent book. --- # The Lean Startup Cycle URL: https://hedge-ops.com/posts/the-lean-startup-cycle/ Explore the Lean Startup Cycle, a method that encourages innovation through early failure and adaptation. Learn how this approach, used in the creation of the ARM processor, can revolutionize your project management.
When [Hermann Hauser](http://en.wikipedia.org/wiki/Hermann_Hauser) created a team to build the [ARM processor](http://en.wikipedia.org/wiki/ARM_architecture), now the processor that runs most of the mobile devices you know and love, [he remarked](http://www.pcpro.co.uk/features/358750/whatever-happened-to-hermann-hauser)\*:

> When we decided to do a microprocessor, in hindsight, I think I made two great decisions. I trusted the team and gave them two things that Intel and Motorola had never given their people: the first was no money and the second was no people. They had to keep it simple.

When you start a project, the normal course of action is to gather as many people as possible so you will have enough resources to accomplish the goal. In [Lean Enterprise](http://amzn.to/1HdjuUt), the authors advocate another way, what they call [the Lean Startup Cycle](http://en.wikipedia.org/wiki/Lean_startup):

1. Work out what we need to _learn_ by creating a value hypothesis
2. Decide what to _measure_ in order to test that hypothesis
3. Design an experiment, called the _minimum viable product_
4. _Build_ the minimum viable product to gather the necessary data from real customers to determine whether we have a good product/market fit

The authors continue:

> The trick is to invest a minimum of work to go through this cycle. Since we are operating in conditions of extreme uncertainty, we expect that our value hypothesis will be incorrect. At this point we _pivot_, coming up with a new value hypothesis based on what we have learned, and go through this process again.

I love this idea. Instead of wasting years of time and money on an idea, let’s realize that we don’t know everything and use the scientific method to get us to the right idea. Let’s not be afraid of failure; you can’t have innovation without failure. The key to innovation isn’t avoiding failure; the key is reacting to failure earlier than the competition.
This method also calls into question the annual budget cycle. If all you have is one chance a year to respond to reality, you’ll end up clamoring for more people and avoiding the simple, early, scientific method laid out above. A better way is to start small, provide some runway to solve a problem, and scale only when the solution shows its value in the real world.

\* I got this quote from Lean Enterprise as well, but cite the original source as they did.

--- # Mission Command URL: https://hedge-ops.com/posts/mission-command/ Explore the concept of Mission Command from the book Lean Enterprise, a military strategy that promotes autonomy and quick decision-making, and its relevance today. In the past, whenever I found myself micromanaged, I complained that I’m not in the military and should have the freedom to use my best judgement to solve the problem. I viewed the military as a command-and-control environment where orders were specifically given and followed to the letter. I then reasoned that this is not how successful organizations operate. The excellent book _[Lean Enterprise](http://amzn.to/1zGXBeP)_ debunked this myth with a concept called Mission Command that I’d like to share with you. From the book\*:

> In reality, command and control has not been fashionable in military circles since 1806, when the Prussian Army, a classic plan-driven organization, was decisively defeated by Napoleon’s decentralized, highly motivated forces.

Sounds a lot like a startup vs. a large company, except this was two hundred years ago. The authors continue:

> Napoleon used a style of war known as _maneuver warfare_ to defeat larger, better-trained armies. In maneuver warfare, the goal is to minimize the need for actual fighting by disrupting your enemy’s ability to act cohesively through the use of shock and surprise. A key element in maneuver warfare is being able to learn, make decisions, and act faster than your enemy.
Once the Prussians were defeated, they studied what went wrong and how they needed to innovate their military strategy in order to regain dominance. Their thought leadership came up with the concept of _Auftragstaktik_, or Mission Command:

> In 1869, Helmuth von Moltke issued a directive titled “Guidance for Large Unit Commanders” which sets out how to lead a large organization under conditions of uncertainty. In this document, von Moltke notes that “in war, circumstances change very rapidly, and it is rare indeed for directions which cover a long period of time in a lot of detail to be fully carried out.” He thus recommends “not commanding more than is strictly necessary, nor planning beyond the circumstances you can foresee.” Instead, he has this advice: “the higher the level of command, the shorter and more general the orders should be. The next level down should add whatever further specification it feels _to be necessary_ and the details of the execution are left to verbal instructions or perhaps a word of command.” This ensures that everyone retains the freedom of movement and decision within the bounds of their authority: “the rule to follow is that an order should contain all, but also only, what subordinates cannot determine for themselves to achieve a particular purpose.” \[emphasis mine\]

So with lives on the line, leaders implemented a sophisticated framework for executing strategy while preserving freedom. This happened 150 years ago. It’s still difficult to implement today, though. We either overdo it with too much control because _the leaders know best; that’s why they’re leaders_. Or we give everyone autonomy and everyone ends up going in different directions. These kinds of ideas are what I love about the _Lean Enterprise_ book in particular and the lean movement in general. Software seems so new to everyone that you can get caught up in solving problems in new ways and giving crazy names to them, like scrum, agile, TDD, etc.
But the lean movement says, “Hey, we’ve had a few hundred years of solving problems with technology and enlightened thinking. You’ll probably find a lot of answers from those who have gone before you.” This concept of Mission Command is a great example of that.

\* The book also credits The [Art of Action](http://amzn.to/1Hdelfn) with developing these ideas. I haven’t yet read it, but it’s on my short list.

--- # Customizing Chef Book Review URL: https://hedge-ops.com/posts/customizing-chef-book-review/ Explore our in-depth review of Customizing Chef by Jon Cowie. Discover how this book can help you understand Chef’s extensibility and tackle complex organizational challenges. Perfect for those with basic Chef knowledge. When I was stuck trying to understand simple concepts about Chef, I bought two books: _[Learning Chef](http://amzn.to/1wHMEZb)_ ([read the review](/posts/learning-chef-book-review)) and _[Customizing Chef](http://amzn.to/1Ajtt8G)_ by [Jon Cowie](http://jonliv.es/). _Learning Chef_ gave me the basic concepts, but _Customizing Chef_ gave me the deep understanding I needed to evaluate the tool for my large, complicated organization. In the closed-source Microsoft world, you figure out what the thing can do and just accept it. This book opened my eyes to the fact that Chef allows me to use a skill (reading code) that I’ve built up for over ten years. That leads to a much deeper understanding of how it works than just _trust us, this feature does X_. That’s not the best thing about the book, though. The thing I appreciated most was the ability to learn from an author who implemented a world-class deployment solution for [Etsy](https://codeascraft.com/) using Chef. The examples he provided were real world. This wasn’t a textbook exposition on Chef. You could tell this was the real deal.
Learning from him in this way reinforced the idea that Chef was something I could implement in my complicated organization, and that with every problem that arises I have options because of Chef’s extensibility. The author did a wonderful job of explaining that extensibility with examples at every level. I learned how to customize Chef’s notifications, cookbooks, and even knife itself. Each customization wasn’t merely explained; the author started with the code and took the reader through a series of examples to build up an understanding of it. _Customizing Chef_ isn’t for someone who is just starting Chef. For that I would recommend the tutorials or the book _Learning Chef_. For someone who is tasked with implementing it in a complicated organization, or someone who has been using it for months and has come across some scaling challenges, this book is a life-saver. --- # Coding for Kids URL: https://hedge-ops.com/posts/coding-for-kids/ Explore the importance of coding for kids in this blog post, highlighting how programming is a valuable tool for the future. Discover myths about programming and how it can be integrated into everyday learning. As a part of the [hour of code](http://hourofcode.com/us) initiative, I was pleased to present my perspective on computer programming to first- and third-graders at [Grapevine Elementary School](http://www.gcisd-k12.org/Domain/1675) in December. Through the presentation I was able to inspire the students to see programming in their future. I’ve been frustrated over the years with how little people know about programming. When you ask an eight-year-old what they want to do when they grow up, you get a lot of firemen, police officers, teachers, and professional athletes. When the children of today grow up, they will more likely work closely with computers, regardless of their profession. No one will be exempt.
So it’s important for them to view programming as a tool, like math, reading, or writing, that is to be used to create value for others.

> Mr. Hedgpeth sharing with our 1st graders about coding for Computer Science Week. #GESshineon #hourofcode pic.twitter.com/pfb3JGKjPg
>
> — Grapevine Elementary (@GESStars) December 16, 2014

I talked about the people who made this toy, the producers, and the people who bought it, the consumers. When they grow up and produce, they’ll produce solutions that use computers in awesome ways. We call that programming. They’ll call it normal. I ended the presentation with some myths related to programming:

| Myth | Reality |
| --- | --- |
| It’s only for video games | Programming is for solving problems. Problems are everywhere, not just in video games. |
| It’s only for adults | My son is learning [Ruby](https://www.ruby-lang.org/en/) through [Codecademy](http://www.codecademy.com/) and using [RubyMine](https://www.jetbrains.com/ruby/). |
| It’s too complicated | Actually, trying to solve a complicated problem _without_ the help of a computer is complicated. Illiterate people think that reading is complicated. This is just another form of literacy. |
| Computers will take over the world and we won’t need humans anymore | Computers do one thing well: what they’re told. You can’t reduce love, empathy, and character to a set of instructions. Being able to work with other people is essential to one’s development and ultimate economic value to the world. |

My son and I are working through his math homework using Ruby right now. If that goes well I might post a few YouTube videos explaining it. I think there is a large, untapped group of people who would really be interested in building programming literacy from a young age. I want to thank Mrs. Cox’s fourth-grade class for the encouraging thank-you notes on the presentation. You guys don’t know how much those meant to me.
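The Ruby-and-math-homework sessions mentioned above can be as simple as a few lines in `irb`. A minimal sketch (the numbers here are made up for illustration, not from an actual assignment) might check a long-division problem:

```ruby
# Check a long-division problem: 173 ÷ 12 (made-up numbers)
dividend = 173
divisor  = 12

quotient  = dividend / divisor  # integer division
remainder = dividend % divisor  # what's left over

# Print the answer in the "dividend = divisor * quotient + remainder" form
puts "#{dividend} = #{divisor} * #{quotient} + #{remainder}"
```

The idea is a quick feedback loop: work the problem out by hand first, then let the interpreter confirm the answer.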
I also want to thank [Nancy Hale](http://www.gcisd-k12.org/Domain/2938) for setting it up and encouraging kids to learn coding. --- # Learning Chef Book Review URL: https://hedge-ops.com/posts/learning-chef-book-review/ Discover the world of Chef with the Learning Chef book. Understand and use Chef, including testing techniques and Windows-friendly commands. A couple of months ago I found myself drowning in the learning curve that was [Chef](http://chef.io). I had great support from them, but I’m the type of person who needs to know a technology in order to appropriately evaluate it. I could tell that Chef was a nice technology, but I didn’t know how. I went through [the tutorials](https://learn.chef.io/), but they weren’t adequate for me to understand the solution. Then I found the book [Learning Chef](http://amzn.to/1Ajqayd). Learning Chef is an excellent first step in understanding the Chef universe and getting started on the right foot with the tool. I absolutely loved the incremental, tutorial approach that [the](http://misheska.com/) [authors](https://sethvargo.com/) take to go from running a recipe on your own machine to running tests on locally available virtual machines. Which leads me to my other pleasant surprise of this book: it lays out the techniques you can use with Chef to test what you’re doing, so you know that it works. That is what separates Chef from many other solutions I’ve seen: [they bake testing into the process itself](http://kitchen.ci/). If you’re going to treat infrastructure as code then you _have_ to test it as part of your deployment pipeline. Fortunately this introductory book doesn’t skimp on this core aspect of Chef. The third great thing about this book is that it is very approachable to those of us who have built our careers programming in the Windows environment. Every command has a hint at what you would do on a Windows box.
This really increased my comfort level with learning Chef by allowing me to learn it in _my own_ development environment. The book is not for people who want a quick, few-hour understanding of Chef to get up and running. For example, if I bring a new team on board with Chef, I probably won’t hand them this book; I’ll probably teach a class over a couple of days to cover the basics. If they’re the type of person (like me) who wants to dig deeper and learns by doing, this book is a fabulous step toward becoming proficient at using Chef.

--- # Solving SSL Validation failure with knife URL: https://hedge-ops.com/posts/solving-ssl-validation-failure-with-knife/ Solve SSL Validation failure issues with knife after moving to a hosted version of the Chef server. Discover short-term and long-term solutions for your blog or website. After I moved to a hosted version of the [Chef](http://chef.io/) Server, I started getting this problem with knife:

```text
knife download environments
ERROR: SSL Validation failure connecting to host: chef.yourdomain.com - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
ERROR: OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
```

There are a couple of ways to fix this. The short-term way is to ignore SSL in your `knife.rb` file with this setting:

```ruby
ssl_verify_mode :verify_none
```

The better, long-term solution is to add this line to the `knife.rb` file:

```ruby
trusted_certs_dir "#{current_dir}/trusted_certs"
```

And then run:

```bash
knife ssl fetch
```

I then had to ignore the `trusted_certs` directory in my git repo. Thanks to [Matt Stratton](http://www.mattstratton.com/) and his colleagues at [Chef](http://chef.io/) for helping me find the solution.
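For context, those two settings live alongside the rest of the knife configuration. A minimal `knife.rb` using the long-term fix might look like the sketch below; the user name, key path, and organization are hypothetical placeholders, not values from the original post:

```ruby
# knife.rb — minimal sketch; all names below are placeholders
current_dir = File.dirname(__FILE__)

node_name         "yourusername"
client_key        "#{current_dir}/yourusername.pem"
chef_server_url   "https://chef.yourdomain.com/organizations/yourorg"

# Verify the server's certificate against copies fetched via `knife ssl fetch`
trusted_certs_dir "#{current_dir}/trusted_certs"
```

With this in place, `knife ssl fetch` downloads the server’s certificate into `trusted_certs/`, and subsequent knife commands can verify SSL instead of skipping verification entirely.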
--- # A Look Around URL: https://hedge-ops.com/posts/a-look-around/ Explore the journey of a blogger’s first six months, the challenges faced, and the lessons learned. Discover how the focus shifted from professional to personal topics and the impact it had on readership. It’s been about six months since [I started this blog](/posts/christmas-with-teamcity) and I thought it was time for a retrospective. I’ve enjoyed talking about issues and ideas that have really helped me find new insight and motivation for where I’d like to go. If you haven’t noticed, I have had a bit of a content categorization problem that I’ve finally found the answer to. Let me explain. The motivation for starting my blog was pretty clear to me from the beginning:

- [Michael Hyatt](http://michaelhyatt.com/) convinced me in [a podcast](http://michaelhyatt.com/093-10-reasons-every-leader-needs-a-blog-podcast.html) that by not having a platform for my ideas I was not going to be an effective influencer. The path to scaling influence is through creating a platform, and the easiest way to begin building a platform is through a blog
- I wanted to find like-minded people who could influence me, and whom I could influence
- I wanted to solidify a lot of what I was learning by writing about it and explaining it

I started out early by writing about a lot of professional topics, then some financial management topics. I saw very quickly that it would not be easy to build a professionally oriented platform; the interest wasn’t there. The interest appeared to be there for the more personal topics, so I went in that direction. Over the past few months, I’ve written more and more about the ideas and experiences that I have found useful in accomplishing my goals, and my readership has gone down and down. For example, in July the readership numbers spiked at 50 or so sessions per day. At the moment the spike is more like 20 per day when I publish something.
It’s not bad that 20 or so people are reading the blog, but the problem is that a good portion of them are my extended family, with whom I could easily (and more persuasively) have a personal conversation. So it’s clear right now that I need a change in strategy. I continue to work on deployment for both our hosted and our store products at my company. I’m learning a lot of lessons, both through management and through learning new products. In fact, I’ve spent more time learning in the past eight weeks than I had in the past three years combined. I’m really excited about where these ideas will take us, and I want to share them with you. If you’re one of my readers who won’t care about that topic, I want to tell you two things. First, thanks for reading and for sticking with me. Second, I’d love to have lunch with you and talk about these topics further. I have a passion for living life well, and I think that in order to do that one must get off the normal path set for us by our culture. For everyone else, welcome to the new hedge-ops, which asks: how can we operate most efficiently with management in technology? I’m looking forward to writing about it. --- # Mornings URL: https://hedge-ops.com/posts/mornings/ Explore the contrast between good and bad mornings in this insightful blog post. Learn how being intentional and avoiding phone distractions can transform your morning routine, productivity, and overall well-being. On the bad mornings, I begin by rolling over and checking my phone. I cycle through [ESPN Dallas](http://www.espndallas.com), [WFAA](http://www.wfaa.com), [CNN](http://www.cnn.com), my [feedly feed](http://feedly.com/), and personal and work email. The work email in particular, since I work with Czechs who are well into their day, causes me to start thinking about work. My mind races to the world’s problems, the world’s drama, and problems I’ll need to solve in a couple of hours that began half a world away.
Thirty minutes later I get out of bed, make breakfast, talk to my kids, and then I am late for work. On the good mornings, [my phone is in another room](/posts/sanitize-your-smartphone-with-republic-wireless). I wake up, do a [14-minute workout](http://en.wikipedia.org/wiki/Shovelglove), read a little on the [Kindle](/posts/focus-with-the-amazon-kindle) or the [Economist](/posts/the-economist-keeps-it-real), and have some time for meditation and introspection. I think about my life as a whole and how I’m contributing to my goals today in realistic ways. I think about how I want to love and serve others in very real and tangible ways. I eat breakfast, and get to work on time. I have as much time on the good mornings as the bad mornings. The key is being intentional and staying off the phone. --- # Right Fit URL: https://hedge-ops.com/posts/right-fit/ Explore our journey as we navigate a business opportunity, balancing risk and reward, staying true to our values, and seeking the right fit. Inspired by Mark Cuban’s advice. [Annie](http://www.hedge-ops.com/about/annie) and I spent the early part of this week looking into a business opportunity that would help us meet some specific goals over the next few years. We were excited, talking late into the night about the various details of this business opportunity. Then halfway into our discussion we realized [it wasn’t going to work](/posts/failure-masquerading-as-success). Of course, we could have made it work. We could have brought more into the deal and thus introduced more risk. We could have compromised our values or our goals. I was reminded of [Mark Cuban’s advice](http://blogmaverick.com/2010/08/25/the-best-investment-advice-you-will-ever-get/): get out of debt and save up $100K in cash. Why does he give this advice? Because if you have cash that you can invest, the right fit will come along and present itself. You don’t have to go try to make things work. I don’t have to force anything. I can stay true to my values.
And I should continue to live a simple and frugal lifestyle. If I do this, the right fit will come along. --- # A European Education URL: https://hedge-ops.com/posts/a-european-education/ Explore the differences between European and American education systems in this blog post. Discover how shorter school days and vocational training could potentially benefit students. A must-read for anyone interested in education reform. Yesterday I was at lunch with a Czech friend of mine talking about education. His daughter just started the first grade and this week they had their initial teacher conferences. The teacher informed him and his wife that their daughter had a hard time focusing. “Of course she can’t focus; they have her there from 8AM to 3PM. I wouldn’t focus at that age either!” It never occurred to me that there was anything out of place about the duration of the American elementary public school day. I asked him how long school was in the Czech Republic. He said that first-graders would be out by 11:30 AM. Wow, what a way to solve the problem of so many kids who are diagnosed with ADHD but may just need to run around and climb a few trees. Our exchange student last year brought some perspective as well. She explained that when kids in the Czech Republic reach the fourth grade, they separate the _academic_ ones from the non-academic ones. The academic ones are prepared for the university. The non-academic ones are prepared for a vocation. So instead of forcing a bunch of people into a mountain of student loan debt that puts them in a career or job in which they don’t perform, we could train them to _not_ go to college, save a ton of money, gain back years of earning instead of going to school, and live much more confident and productive lives. Instead of labelling kids as bad because they can’t (unnaturally) sit in a room for seven hours with a few breaks in between, we could limit the amount of time young students spend in school.
I absolutely love having friends with different perspectives and [life experiences](/posts/life-is-art). The American system is not the only good system; it’s not even the best at most things. Without my European friends, I would never see that. --- # Two Questions You Should Ask About Your Commitments URL: https://hedge-ops.com/posts/two-questions-about-commitments/ Reevaluate life’s commitments with two questions: How’d you feel without it? Can you freely reject it? Prioritize for a balanced, fulfilling life. Lately we have been too busy, and we have had to figure out [what is working for us](/posts/achievable-contentment) and [what isn’t](/posts/failure-the-catalyst). Usually when we get this way, our entire lives are filtered through two questions. I’d like to share them with you so you can think about what the answers are in your life. First, make a list of the things in your life that you’re committed to. I’ll wait. OK, now for each of those things ask, “How would I feel if this wasn’t in my life? Relieved?” If so, perhaps it should be a candidate for changing or removing from your life. How you feel when imagining something is gone is the best way to know whether you really want to do it. Now for the second question: what would be the reaction of others if you _did_ say no? This is a tricky one, because it shows the long-term health of your relationships. I can tell my wife, “No, I don’t want to do that. Let’s think of an alternative.” She would respond with love and acceptance, and we would work something out. I couldn’t tell my unhealthy church from college that. If you had asked me at the time to imagine it being gone, I would have sworn up and down to you that I loved it. But then if you had asked me whether I was _free_ to reject it, if I was being honest I would have to say no. So rid yourself of everything you are relieved to get rid of. Only keep that which you can freely and openly reject.
--- # Focus with the Amazon Kindle URL: https://hedge-ops.com/posts/focus-with-the-amazon-kindle/ Boost your reading focus with Amazon Kindle: distraction-free, long battery life, and sleep-friendly lighting. Enhance growth and knowledge effortlessly. I’ve been writing about [focusing on what matters](/posts/life-is-art) lately by [making good choices](/posts/achievable-contentment) with how and what you consume. Some things are just a complete waste of time like the [local news](/posts/rubbernecking-with-the-locals). Other things have a good alternative, like [reading the Economist](/posts/the-economist-keeps-it-real) instead of going to [news sites](/posts/escaping-with-the-news). As I wrote last time, though, in this mobile world that isn’t enough. You have to make sure your smartphone is serving you and not enslaving you. And beyond that I’m convinced to lead a life of growth and meaning you need to read. Probably more than you are right now. So how do we do that? The solutions that come to mind immediately are: on our smartphone, a paper book, or a tablet. I’m going to talk about a better way to read books: through the Amazon Kindle. A few years back a friend asked me what I thought about the Kindle, and my opinion to him as of July 16, 2010, was: “_It seems like having a book would be better._” Well, thankfully my friend didn’t listen to my awful advice and bought one, told me about it, changed my mind, and I haven’t looked back. Here’s what’s so great about the Kindle: 1. _Distractionless:_ yes, that’s a feature and not a bug. I like it that there aren’t pop-ups telling me all about the emails I’m not reading or the cool Facebook posts that I need to be looking into. I’m here to read. The Kindle helps me read. 2. _Battery Life:_ I don’t have to be preoccupied with powering the stupid thing while I’m using it. I can read. I have to charge it every once in a while. It’s not like the power sucking smartphone or tablet. 3. 
_Position Independent:_ this may seem small to some, but I like reading while I’m lying down. With a paper book, I get comfortable holding it in one hand, then I read my way to the other side of the book and have to shift position to read the other page. It seems like a small thing, and admittedly a first-world problem, but I love that I can read the Kindle with one hand, lying on the same side for as long as it is comfortable. 4. _Sleep-Friendly Lighting:_ I have the Kindle Paperwhite, which doesn’t blast light into my eyes and keep me up. I can do a little nighttime reading with a very small amount of light. It also doesn’t distract my wife. 5. _Borrower Capable:_ I have learned to take advantage of my library’s borrowing privileges on the Kindle. This is easy once you figure it out. I do admit to buying more books than I would have before, but I think that’s a good thing as long as I read them. 6. _Keeps Your Place:_ I pick up the book and read exactly where I left off. I don’t have to think about it. I just go. 7. _Read More:_ I’ve read much more than before. If I’m interested in a topic, I can get a book on that topic and read through it quickly while I have a passion and interest in it. 8. _Portable:_ I admit to not doing it very often, but if I want to, I can continue my reading on my smartphone or even my computer. I highly recommend going for a Kindle. You need to read to meet your goals, and in my mind there is no better way to do so than with the Kindle. --- # Sanitize Your Smartphone with Republic Wireless URL: https://hedge-ops.com/posts/sanitize-your-smartphone-with-republic-wireless/ Optimize phone usage with Republic Wireless: affordable plans, Wi-Fi data, and reduced distractions. Prioritize real-life connections over screen time. The invention of the iPhone will probably be one of the key technological events of my lifetime.
It changed the game from quirky Blackberry kind-of-phones, to a new experience that delivered a whole set of new capabilities in people’s lives. It has changed [the businesses I serve](/posts/ten-takeaways-from-the-last-10-years-at-radiantncr) and will continue to change our lives for years to come. Once I finally got an iPhone it took over my life. This coincided with the birth of my children, so I spent a lot of time on the iPhone while they were at the park, or sleeping, or messing around at our house. Then I realized that I will never get those moments back. I realized that the promise of convenience and greater efficiency does not line up with [my personal goals](/posts/achievable-contentment) because my personal goals are to [live a life of meaning, love, and connection](/posts/life-is-art), especially with my family. To put it more simply, I was on my phone and ignoring my family. What to do about this? I considered going away from a smartphone. That didn’t seem possible because there were times when it really did serve me. I tried to make a few rules like, _Stop ignoring my children_, but those didn’t seem to work when the siren song of Facebook and email came singing. On top of that, I was spending over a thousand dollars a year on this habit that wasn’t serving me. Then I learned about a better way: Republic Wireless. With Republic Wireless, I bought a phone for $300\* and now spend $10 a month on my mobile phone. I get data through Wi-Fi, which, by the way, exists at home and at work. I don’t get data on the way home from work. If I’m traveling, I get the option to change to the $25 a month plan that has data. I can even change it back to the $10 plan when I get home and only pay for the $25 plan prorated for the amount of days I used it. This means that I don’t instinctively go for the phone when I’m out and about. It’s a tool; it’s not my master. 
It also means that if I need it to direct me somewhere, I can type in directions before I leave while I’m on Wi-Fi, and the smartphone will get me there. I feel like I have the best of both worlds. Combined with my $10/month landline, I now have no excuse to be on my phone ever. It’s an accessory. It’s not my whole life. Now I can live with the best elements of it and leave the rest alone. If you’re interested in freeing your life from an insane addiction to the smartphone, you can get $10 off of Republic Wireless (as well as give me $10 off on my bill) by going [here](http://rwshar.es/6lhy). Enjoy! \* You can get an off-contract phone for even cheaper, at $99. Do the math, and you’ll save a bundle while being able to spend less time on your phone. It’s a win-win! --- # Facing the Ultimate Distraction and Often Losing URL: https://hedge-ops.com/posts/facing-the-ultimate-distraction-and-often-losing/ Facebook promises social connections but often reduces face-to-face interactions. While it keeps us updated, it might be diminishing genuine conversations. It was around 2008, and I heard about this new way to interact with people in your world. We were at a family gathering, and I started talking about this great application I had found called Facebook. My sister-in-law was delighted that we had finally caught up with her and that we were all going to join in this wonderful new thing. Those were the days when we were all so clueless about social media that Facebook had to cue us to post stuff, with “Michael is…” and then you wrote your thought into the text area. So the posts were things like “is enjoying my [s-day](http://www.nosdiet.com/)” on August 23, 2008, and “is getting ready to go to India” on September 25, 2008 (almost exactly six years ago). It’s been great. I’ve stayed connected to people and shared in the lives of others in a way that I would not have before. I have a confession to make, though: I have a very conflicted relationship with Facebook.
To understand, let’s filter the last few posts I’ve written through the lens of what each thing promises and what it delivers: | Category | What it Promises | What it Delivers | | --- | --- | --- | | [Traditional News](/posts/escaping-with-the-news) | Your world doesn’t matter as much as all of these important people doing important things. Know about them so you can influence them. | A visceral and frustrated passion for things over which you have no control. | | [Sports News](/posts/sports-news-soap-operas-for-people-who-make-fun-of-soap-operas) | The game will be much more fun if you know _everything_ there is to know about it. So know the gossip and facts behind everything you are watching so your hobby can become your life! | Paying way too much attention to sports as a substitute for real growth and change in your life. | | [Local News](/posts/rubbernecking-with-the-locals) | This is _your world_, these are _your neighbors_, this stuff can happen to you. So you had better pay attention before your lack of attention costs you dearly. | None of this affects your life; it’s just used as a distraction. | | Facebook | This really is your world. We’re serious this time. This really does affect you. Why would it not? These are your friends! So pay attention so you can live a more informed life pertaining to the world around you. | This replaces small talk, which in reality hurts your relationships with others rather than helping them. | Hopefully that illustrates it for you. The promise of Facebook is so alluring: this is a situation in which being informed can really pay off. After all, this is your life!
The problem is there is no easy way to filter out the noise within Facebook to get to reality. And the second problem is that even when people are informed about your life through Facebook, when you finally meet face to face _there is nothing to talk about!_ You read that correctly. Facebook, rather than making you _more social_, actually makes you less social. You already know the things about people that you once would have asked them about to spark conversation. So over time you make small talk less and less. Over time, you are less and less social. So this is my conflicted relationship with Facebook. On the one hand, I like being in tune with friends and family. On the other hand, that comes at the price of a lot of checking and time. I’ve considered over and over giving it up altogether. But as my brother says, [leaving Facebook is the adult’s equivalent of running away from home](http://weknowmemes.com/2011/11/quitting-facebook-is-the-adult-version-of-running-away-from-home/). I wish I had some answers for you on this one. On paper this is something I should give up, but somehow I keep coming back to it over and over again. Maybe in a year I’ll have a good balance. But right now I don’t, and I just wish the whole thing would go away. Maybe that means I’m ready for a break. --- # You Have Time URL: https://hedge-ops.com/posts/you-have-time/ Shift your mindset: Instead of lacking time, prioritize what truly matters. Eliminate time-wasters like excessive news consumption to focus on personal growth. I’m taking a break from [my hatred of all things](/posts/rubbernecking-with-the-locals) news to take a step back and write about _why_ I’ve been so negative in these recent posts about the news. I’m not against entertainment. I think it serves a valuable role in our lives. I like watching a football game or hearing about things that are none of my business. It’s a lot of fun. I just no longer fool myself into thinking that it matters.
It’s just entertainment. This isn’t life or death. When I read books so I can grow into a better person, that takes time. When I [ride my bike](/posts/engineering-travel) to work every day instead of driving a car, that takes time. When [I read](/posts/focus-with-the-amazon-kindle), spend time with my family, or volunteer, that all takes time. And our first reaction to changing things up in order [to live out our values is](/posts/life-is-art): I don’t have time. I disagree. You have time. Yes, I said that right. You have tons of time. Not having time means that you don’t have time to consume the news. Not having time means that you never watch TV for anything, ever. Not having time means that you don’t get a full night’s sleep. Not having time means that you never do anything that doesn’t involve feeding or sheltering your family. You have time. My time waster is the news. I used to spend literally hours a day reading and thinking about it. Yours might be different. Whatever it is, see it for what it is, put things in perspective, and focus on what really matters. The question is not: When will I have enough time to do what matters? The question is: When will I stop doing what doesn’t matter, so I’ll have enough time to do what matters? That mindset change will change your life. --- # Rubbernecking with the Locals URL: https://hedge-ops.com/posts/rubbernecking-with-the-locals/ Explore the phenomenon of rubbernecking in the context of local news consumption. This blog post challenges you to evaluate the value and impact of the news you consume daily. We’ve all been there. Bumper-to-bumper traffic. Lights ahead. An ambulance speeds by in the shoulder lane. We inch by for a few minutes and finally make it to the accident. Then something inevitable happens: _What happened!?!?! I need to brake and see what happened! Are they going to be OK? Do I know them? Is there…blood?_ Rubbernecking. This is exactly what is happening in the local news.
I’d like to give you a test to take right now: 1. Go to your [local news site](http://www.wfaa.com). 2. Count how many articles fulfill these criteria: 1. You will remember them a month from now 2. You didn’t hear about them from another source 3. They will affect your life in some way I just did this as of this writing, on September 5. I counted zero articles. I think that you might come up with a similar number. What’s going on here? We like knowing that no matter how bad things are for us, or how crappy our day is, there are at least a few people on the news who have it worse. What value does this serve us? Logically speaking, absolutely none. This is a worthless activity. It isn’t even [entertaining like sports](/posts/sports-news-soap-operas-for-people-who-make-fun-of-soap-operas). It has no significance in the lives and livelihoods of all but a very small number of people. So I’m trying to wean myself off of my local news addiction. I’m not replacing it with anything. I’m just stopping. I’ve had varying degrees of success over the past few months, because I love reading the local news. It’s worthless. I hope to fully believe that one day. --- # Sports News: Soap Operas for People Who Make Fun of Soap Operas URL: https://hedge-ops.com/posts/sports-news-soap-operas-for-people-who-make-fun-of-soap-operas/ Explore the parallels between sports news and soap operas in this blog post. Discover how sports news often focuses on drama and intrigue, much like a soap opera, and why it’s important to remember it’s just entertainment. In the past I listened to _The Ticket_ every day on the way to and from work. I would follow the local teams through ups and downs. When the Dallas Cowboys won their only playoff game in almost twenty years, I was turning on The Ticket to hear the analysis. When the Mavericks won the NBA championship a few years ago, I celebrated with my friends on The Ticket.
Then I realized something: Sports News is even more worthless than _Real_ News. Yes, you are hearing me correctly. I [just finished](/posts/escaping-with-the-news) outlining everything wasteful and dumb about sites like CNN and Fox News, but I’m saying sports news is even worse than that. The difference: at least in _real_ news we are talking about people’s lives and well-being. In sports, we are talking about a game using the same medium and tone that we use to talk about people’s lives and well-being. A couple of observations really drove this home for me: sports news _loves_ the soap opera story. It’s funny to think of it that way, too, since so many sports fans are uber-masculine and would never admit to enjoying soap operas as entertainment. A few examples (read these in the tone of a media-obsessed teenager talking about a soap opera): Did you hear about [Ron Washington](http://espn.go.com/dallas/mlb/story/_/id/11471420/ron-washington-quits-manager-texas-rangers)? He _resigned_ today as manager of the Rangers for personal reasons. Jon Daniels says it _isn't_ about drugs. But who knows? I wonder how the players are feeling? Did you hear about the [number of penalties being called in NFL preseason](http://insider.espn.go.com/blog/nfl/rumors/post/_/id/24032/will-refs-pocket-flags-in-regular-season)? I wonder if, like, the NFL will call those during the _regular season_? This might, like, change the whole game. Did you [hear that Wes Welker](http://espn.go.com/blog/denver-broncos/post/_/id/8501/wes-welker-roundup-everything-you-need-to-know-about-his-suspension) is _suspended_ for _four games_ for taking drugs? I wonder what drugs they are? What is all of this about? Just like the news, this is about sucking people into intrigue over things that don’t matter, so they’ll be sitting there when the advertising comes on. It’s about selling you stuff you normally wouldn’t buy. Does the nature of Ron Washington’s resignation matter? No.
Is there any effect you can have on the number of NFL penalties by reading the article? No. No effect. You. Are. Wasting. Your. Time. Does it matter what Wes Welker did or did not do? To him and his family, it means he makes a few hundred thousand less out of multiple millions. To us, it means nothing. It means nothing. Let that sink in, sports fans. This is entertainment. It’s not your life. It’s not your family. It’s not your livelihood. It’s a business run by people who make a lot of money tricking you into thinking it matters. With that in perspective, by all means go to a game or watch one on TV and enjoy it. But leave it at that, and find better things to do with your time. Over time, even that will begin to fade away. --- # The Economist Keeps it Real URL: https://hedge-ops.com/posts/the-economist-keeps-it-real/ Discover why The Economist is a superior news source in this blog post. Learn about its international focus, lack of distractions, and balanced reporting. Ditch the drama of mainstream media today. Over the past year I’ve grown to see [the absolute insanity](/posts/escaping-with-the-news) of following the news on normal news sites like CNN or Fox News. You might be thinking that I have no clue what is going on in the world. I do value being an educated person who knows about world events. I have a secret weapon that gives me insight into the world without driving myself crazy day to day. It’s [a paper subscription](http://www.subscriptionaddiction.com/magazines/subscription/the-economist-magazine-magazine.jsp) to [The Economist magazine](http://www.economist.com/). Here are seven things I love about having a paper subscription to The Economist: 1. _International._ It’s not focused only on Americans. I get more insight into the world than the typical American news outlets provide. 2.
_Soap Opera Free._ It doesn’t try to get me sucked into idiotic stories about how people are going to react to this or that, or worse, what a celebrity was doing last Friday night. I want to be an educated man who knows world events…not a celebrity gossip connoisseur. The subscription keeps everything serious and leaves the rest out. 3. _Distraction Free._ When I read the magazine, I don’t see emails popping up. My kids don’t think I’m playing video games. They know I’m reading a serious news magazine. And when I put it down, I do something else. I don’t get sucked into ten hours of mindless video watching. This is a feature and not a bug. 4. _Not Republican or Democrat._ I love that the magazine gets me out of the false American dichotomy of Republican and Democrat. It has no interest in getting me to be _more_ one way or another. It just calls it like it sees it. Its slant is probably more Libertarian, but honestly it’s difficult to tell sometimes. And that’s how I would like to take my news. I don’t want to be told what to think. I want some facts and fair analysis, and I want to think about it myself. 5. _Weekly._ I don’t think the appropriate news cadence should be any more frequent than once a week. If something is important, I’m going to find out about it. I don’t need to spend every day reading the news. It just doesn’t matter that much. So the fact that The Economist is weekly is perfect for me. 6. _Guilt Free to Skip._ This isn’t a _feature_ of the magazine as much as it is one for me. If I had a subscription and didn’t read _any_ of it, I still think it would be worth it. Why? Because it kept me from the hours per day of distraction I had been getting by going to news sites. So The Economist can be skipped just fine. 7. _Diversified._ It’s not just a political magazine. It also covers business, technology, and science. It has special features that bring you up to speed on a topic in depth.
I have subscribed to The Economist for months now and absolutely love it. I highly recommend you ditch your current news habit and buy an Economist subscription. I did some searching and found a deeply discounted deal [here](http://www.subscriptionaddiction.com/magazines/subscription/the-economist-magazine-magazine.jsp). Try it and you’ll be hooked. If you run into me personally, I’ll be happy to give you a copy so you can see what I’m saying; just ask. --- # Escaping with the News URL: https://hedge-ops.com/posts/escaping-with-the-news/ Explore the impact of news consumption on our lives and emotions. This blog post challenges the relevance of political news and encourages readers to question how news affects their daily lives. I was having a great time at a party at my house recently when a friend of mine who is quite conservative started talking to me about [the Benghazi incident of September 2012](http://en.wikipedia.org/wiki/2012_Benghazi_attack). He went on and on about how outrageous it is. I responded, “I just don’t see how any of this is important or relevant to my life.” Eyes wide open, he responded in an outraged tone, “You don’t think _Benghazi_ is important?” To my life? No, I don’t. In fact, I can take that a step further: the news, especially political news, is designed to get people outraged over things that are not in their power to change or influence. This state of frustration and anger puts them in a place where they are willing to buy more things from the advertisements, which is good for the provider of the outrageous news. But for the person consuming it, it is the very definition of a waste of time: spending time on something one doesn’t find enjoyable, can’t do anything about, and that causes one to act in ways that are outside one’s values. This is insanity!
I’d like to propose we ask a question about every news article that we consume from now on: “How will my life change as a result of knowing this information?” I propose the answer is, for the most part, “not at all.” To illustrate, let’s comment on some CNN and Fox News headlines as of the date of first writing this, July 26, 2014: | Story | How will my life change as a result of knowing this information? | | --- | --- | | [Rockets fired; Israel-Hamas truce appears over](http://www.cnn.com/2014/07/26/world/meast/mideast-crisis/index.html?hpt=hp_t1) | None. The rockets likely won’t make it to North Texas. There is no election today to influence our political establishment, and political leadership in the U.S. seems to be aligned (for the most part). | | [U.S. evacuates embassy in Tripoli](http://www.cnn.com/2014/07/26/world/africa/libya-us-embassy-evacuation/index.html?hpt=hp_t2) | None. I am in North Texas, not Tripoli. I have nothing to fear from whatever is happening there. | | [Joe Paterno Feared Wrongly Accusing Sandusky, Son Says](http://www.foxnews.com/us/2014/07/26/joe-paterno-feared-wrongly-accusing-sandusky-son-says/?intcmp=latestnews) | None. I am not, nor do I have any friends who are, a part of this case. While it is a tragedy, my knowing about the state of mind of one of the participants in the case, who is now dead, will have no bearing on the outcome of the case whatsoever.
| | [Officials cite marijuana as reason for rise in Denver homeless](http://www.foxnews.com/us/2014/07/26/officials-cite-marijuana-as-reason-for-rise-in-denver-homeless/?intcmp=latestnews) | None. I live in North Texas, not Denver. I do not use marijuana. I suppose if I did, this news article wouldn’t change my mind about it. There is no marijuana vote on the Texas ballot today for me to vote on. | | [Some in CIA ‘torture’ report denied chance to read it](http://www.foxnews.com/politics/2014/07/26/some-in-cia-torture-report-denied-chance-to-read-it/?intcmp=latestnews) | None. I am not subject to torture, nor am I defining what torture is in the coming months, as far as I know. In fact, if I were to be tortured, I’m not sure that this report would have anything to do with how I would be treated. | | [Fast food workers vow civil disobedience in wage fight](http://www.foxnews.com/us/2014/07/26/fast-food-workers-vow-civil-disobedience-in-wage-fight/?intcmp=latestnews) | None. I don’t work at McDonald’s. I also don’t eat there very often. I suppose if their wages do go up, the price of a Happy Meal will eventually go up, creating a surprise on the rare occasion that I go to McDonald’s. But we’re quite a ways from this: they are vowing civil disobedience. Nothing has actually happened. This affects my life in no way whatsoever. | Let me give you a little guarantee: if you play this game for a few days on the news sites you are visiting, you will very quickly be awakened to the reality that this is all a complete and total waste of your time. You are a pawn in someone else’s game to get you to buy stuff. On most days there actually isn’t any news. And when there is, it can be summed up in a ten-second description. So let’s stop investing so much time in _news_ and make some _real_ news in our own lives.
--- # Escaping URL: https://hedge-ops.com/posts/escaping/ Discover how to live in the present and escape the distractions of modern life in this insightful blog post. Learn how to focus on what truly matters and stop running from your real life. I get home [from work](/posts/ten-takeaways-from-the-last-10-years-at-radiantncr) and find myself daydreaming about a feature I need to add to my software as my son tells me about [Minecraft](https://minecraft.net/). I finally have a few moments of free time and spend them reading about movie stars and wars on the internet. At work, I have something really important to work on, but I read the latest tech news instead, and I get involved in a long conversation with a colleague about office politics. What is going on here? I think it’s simple: real life is lived right here and right now with the choices you make. It’s not at work when you’re at home. It’s not with movie stars. It’s not in office politics. It’s not even on Facebook. It’s in your own choices _right now._ Our problem is that many of us don’t like or are scared of the choices we have in front of us. If we were really honest with ourselves, we’d have to admit that we just don’t want to deal with our _true_ lives. We really want an escape. So I run away from my present choices and go somewhere else. At home, I go to work or to the lives of famous people. At work, I move away from important decisions to politics or the unimportant. When relating to others, I am preoccupied with my Facebook or Twitter status. Or I keep thinking about how everything will be better in the future once my plans come to fruition. This is insanity. Do you want a [beautiful life](/posts/life-is-art) of meaning and purpose? Stop running away from it. To do this you’ll need to set up a lifestyle that blocks out the distractions that keep you [from what truly matters](/posts/achievable-contentment). You’ll need to figure out how to ignore the news.
You’ll need to figure out how to stop checking your phone like a crack addict every five minutes. You’ll have to learn how to live a life that is present in what is happening _right now_ in _your life_. This is the journey I’m on, and I’m by no means perfect at it. We’re in this together. What are you running from? And what are you using to distract yourself from what really matters? --- # The Default Script URL: https://hedge-ops.com/posts/the-default-script/ Explore the concept of ‘The Default Script’ in our society and how it impacts our pursuit of happiness and meaning. Discover why this script may not lead to fulfillment and learn alternative ways to live a meaningful life. We want our lives to be [a work of art](/posts/life-is-art) and reflect the beauty and meaning of our values. Most of us in search of meaning in our lives will naturally absorb our culture’s script for us: > _To create a beautiful work of art, you must feel good. To feel good, first you must make good money. You need to go to school and then get a good, stable job._ > > _Once you get the job, start maximizing convenience. Pay for things you no longer want to do, including mowing the lawn, making meals, washing your car, fixing your house, going to the store. You work hard, so you don’t have to do these things._ > > _Also, make sure you are as comfortable as possible. Get a nice big house you can be comfortable in, a nice, new car with leather seats that feel great, and stop at the ice cream store and give yourself a treat that makes you feel like a kid again. You deserve it because you work hard._ > > _You won’t be happy unless other people respect and want to be like you. So when you get a car, get something that will show everyone how hard you have worked. When you go on vacation, make sure it’s something totally awesome that will show how you have great taste and can afford the finer things in life. You want to be the best you can be.
You’ve worked hard for this._ > > _You need to be happy. That’s what life is about. So make as much money as possible and spend it mainly on three things: convenience, so you can focus on the things that really matter; comfort, so you won’t be distracted by annoyances and pain and can focus on enjoying life; and respect, so you can be validated that you have done something meaningful with your life._ What’s wrong with this? Well, nothing, if it works. The problem is, when we think through this approach, we don’t actually see anyone reaping its benefits. Instead, we see millionaires who need more. Actors who are on drugs. Affairs. It’s an absolute mess out there. There’s never an end. There’s never enough. This isn’t a good way to create a life of meaning and value. It’s a great way to sell stuff, though. This script does nothing about the biggest human problem: no matter how much you have today, you always need more. Once you realize that, the above description becomes silly. [You’ll never have too much convenience](https://www.youtube.com/watch?v=uEY58fiSK8E). No matter how much comfort you have, you know people who have more, and that makes you feel…uncomfortable. There will always be someone who gets more respect than you do. Your needs are _insatiable._ This is unavoidable. The solution? Accept insatiability as a fact of life, deal with it on a regular basis, and you will have a shot at sanity. How do you do this? Keep reading to find out. --- # Life is Art URL: https://hedge-ops.com/posts/life-is-art/ Explore the concept of life as a work of art in this blog post. Discover how simplicity, purpose, and tranquility can shape your canvas, and how every life is a unique masterpiece. Inspire and be inspired to create a beautiful life. Earlier this year, I was pretty excited about some [drastic changes](/posts/achievable-contentment) we had made in our lifestyle.
Whenever I’m excited about something, I must tell someone, and my friend at a dinner party at my house was no exception. I excitedly pointed out all the ways that simplicity has led us to a life of greater contentment and tranquility. I could sense my friend becoming increasingly uncomfortable. “Well, not everyone can live that way.” My friend assumed that I view life as a science: that there are certain rules that _must_ be followed and a right and wrong way to do things. I completely disagree with that notion. To me, life is more like a work of art. You have a canvas on which to paint decisions, consequences, and ultimately meaning. Every life is as different as every work of art. When a painter presents her painting, she is expressing herself in a unique way that is unlike anyone else’s. The painter has _her_ way of painting. To her, it _feels_ like the right way, because, of course, she did it that way. Another painter may do it totally differently. A wonderful and mysterious thing happens in the gallery: each painter admires the brilliance and beauty in the other’s work of art. So I am creating a work of art that I hope will be beautiful and meaningful. I’m not creating the life of the entrepreneur who spends her nights and weekends creating a groundbreaking service that improves the lives of millions of people. But I can appreciate her work of art. I’m not creating the life of the philanthropist who travels the world to find a way to cure malaria worldwide. But I can appreciate his work of art. I’m not creating the life of the video game developer who works for two solid years to create a game that captures the imagination of a teenager who stays out of trouble because he has something to do when he’s out of school. But I can appreciate his work of art. I’m not creating the life of a hospice nurse who works the night shift helping dying patients find dignity and meaning in their final days. But I can appreciate her work of art.
I’m creating a life of tranquility, meaning, and purpose. I do this by simplifying life in every facet, achieving in the marketplace through a fearless pursuit of adding value, and creating margin in my life to build a legacy with those around me, especially my family. This is my work of art. This is the journey I will share with you. It isn’t the only way. But I want it to be beautiful, and I want to share it with you because I hope we can inspire each other to make our lives a beautiful work of art. --- # Getting Things Done Action Plan URL: https://hedge-ops.com/posts/getting-things-done-action-plan/ Discover how to implement the Getting Things Done (GTD) methodology in your life with our step-by-step action plan. Learn to collect, process, prioritize, delegate, and review tasks effectively. Start your GTD journey today! So you’ve been reading along, and you want [to implement GTD](/posts/mind-like-water) for yourself, but you don’t know where to start. I’ve been there before. More than once. I’ve started, then stopped, then started again. What do I tell people to do? _First, Collect Everything._ Go through your entire life: your email, mail, closet, garage…everything. Put it all in an inbox. Make a list of everything on your mind and put _that_ in your inbox. You want to get everything out of your mind and into your system. Now that you have it all in one place, you’ll: _Get All Inboxes Empty._ Take everything off the top and process it as we went over in the process post. No exceptions. You may not pay that bill right now (but if it takes two minutes, you should), but you’ll have it in your system to do later. _Decide Your Yes._ You now have a big list. You were already saying “no” to quite a bit of that list, so that’s not going to be any different from before. Only now you’re going to be conscious about it. So what do you want to do someday? What do you want to do now? What are your focuses?
Put the things you want to focus on in a list with your name. I have a “michael” list. _Share and Delegate._ Do you need to do everything? If not, create another list and share it with the right person. Talk with them about your joint goals and get agreement on what to do next and when. Set due dates and follow up. I have a “home” list for home projects with my wife, and a “money” list for financial stuff that needs to be done with my wife. At work, I have a list for every project I’m working on, shared with the people who are on that project. Not everyone updates it or gets involved, but I’ve been pleasantly surprised at how receptive people are to having a project plan right there for them to think about. _Review Regularly._ Set recurring tasks in [Checkvist](https://checkvist.com/) to review your lists, or put reminders on a calendar if you don’t think you’ll check them otherwise. Reviewing is key to the GTD methodology, so do something that will get you looking at your lists. _Keep Saying No._ Every day you’re going to say no to stuff. You should get comfortable with that. The minute you think “I don’t want to do that today” and it’s due today, change or remove the due date! You have the power to make Checkvist show reality, so by all means let it show you reality! Don’t settle for wishes when reality is just a few edits away. I hope you get as much out of GTD and Checkvist as I have. I’d love to hear how your implementation goes if you’re convinced that you need to try this. I’m convinced that GTD isn’t about the tool, so you can implement it any number of ways. However, Checkvist is the best one out there, hands down! --- # Mind Like Water URL: https://hedge-ops.com/posts/mind-like-water/ Discover how to achieve a ‘Mind Like Water’ through the Getting Things Done (GTD) system. Learn to react appropriately to life’s challenges and boost your productivity. The result of [Getting Things Done](/posts/productivity), when implemented properly, is _Mind Like Water_.
When I first heard the term, I thought it meant “peaceful, still, quiet.” That’s pretty silly, though, because water is just as often in many _other_ states! What _Mind Like Water_ really means is that you _appropriately_ react to your world. If a small pebble is thrown into a pond, it makes a small splash. If a large boulder is hurled into that same pond, it’s going to make a huge splash. This is how our minds should work. The problem is we get tricked into reacting to pebbles as if they were boulders. How do we get out of it? Let’s review the system: - _Collect_ helps you know that you can get to it later, and focus on the things that are happening now. - _Process/Organize_ helps you get the small things off your plate so you can focus, and organize the large things into actionable steps you know you will come back to later. - _Review_ is the essential piece where you start trusting that you will indeed remind yourself of things you thought of before and take appropriate action. So for now, relax and focus! Do you see a pattern? Everything in this system is oriented around forgetting about everything around you and focusing on the next important thing. When a big thing comes along, you collect it, process it into your system, and do it. You appropriately react to it. All the pieces of the system must be in place for you to reach this state of tranquility and productivity. And I only reached it when I used [Checkvist](https://checkvist.com/), because that was the only system where I could get a good review workflow going; it’s so adaptable because of its free-form structure and keyboard centricity. The strange thing about GTD is the lack of focus on prioritizing what you do. Other systems will focus on prioritization, or on assigning an _A_ to the most important things.
With Checkvist and GTD, once you organize your outcomes into actionable steps and trust that you don’t have to keep juggling a million things, _what you need to do becomes obvious._ What do I need to do today? Let me review my lists, and the most important thing will pop out at me. I’ll say “no” to all the rest by delegating them or saving them for later. There is no need for priority codes. There is no need to make it bold, or red, or flashing. The system will work, and will tell you what to do. So get to work and get something done! --- # Review the Glue URL: https://hedge-ops.com/posts/review-the-glue/ Discover the secret to achieving your goals and staying organized with our review of the Checkvist system. Learn how to effectively process and review tasks for improved productivity. It happened to me over and over again. I got sick and tired of being disorganized, of missing things, of not meeting my goals. “I am going to make a system.” I wrote down everything I needed to do in a to-do list. I made sure my emails got processed correctly. And a few weeks later it was as if none of that had ever happened. Why did this happen to me over and over again? Unlocking this secret was the key to actually implementing Getting Things Done. Here’s the key: your system won’t work unless there is a place in it for you to review what you have processed. Think about what we talked about earlier regarding your cluttered mind. You can’t tell your mind, “Stop thinking about that, I’ll get to it later” unless it can trust that you _will_ get to it later. Think about the empty inbox. Are you _really_ going to put that important email in the @Actions folder if you don’t trust that you’ll keep it there? No. What you’ll do instead is keep the _important_ emails in your inbox because, really, you don’t have a review system in place. This is where [Checkvist](https://checkvist.com) really shines for me.
I review items in Checkvist two ways: _Due Dates._ If I need to get something done within a certain amount of time, I’ll set its due date. Every day I’ll check the “Due” screen (shortcut `dd`) and make sure I’m not missing anything. I’ll even set due dates on items that I _commit_ to doing by that time. For example, no one is forcing me to read a book by August 1, but I put that as a due date to tell myself that is my goal. _Review Repeating Tasks._ I set up a task called “Review _\[list\]_”\* for every list I have, recurring as often as I need to review that list. For example, I have a “Review money” task that recurs every Sunday and Wednesday for the list with financial tasks on it. The task is “Due” those days, so it shows up in the due list that I’m checking from the above point. This way I only have to remember one thing: every day, look at my due list, which will tell me when I need to review my lists. That’s my system. Other people can schedule meetings with themselves, but that never worked for me. I need to review the areas of my life more regularly than that. And, as one who has tons of projects going on both personally and professionally, I have an even more complicated workflow that I’ll share in a future post about my reviews. But I think the above system works as well as anything. The bottom line is to make sure you have a system to review things, or else you don’t have a system. - Hint: When you set up the review task, type `Review [lst:` and the list will pop up. Select the list, and you’ll be able to navigate to it with the `gg` shortcut. --- # Process and Organize URL: https://hedge-ops.com/posts/process-and-organize/ Discover how to effectively process and organize your tasks for better productivity. Learn the steps to transform your ‘stuff’ into actionable tasks, achieve inbox zero daily, and maintain sanity in your life. When I get home from work, I always go to my mailbox, take a few letters off of the top of my pile, open them, and put them back in the mailbox for later.
Said no one ever. It’s interesting to me, though, that this is exactly what people do with email. This is insanity. In order for you to have sanity in your life, you need a system that answers, “What have I put in my system to act on later, and what do I need to do now, so I can take action on things at the appropriate time?” This is the process and organize phase of Getting Things Done. Here’s how I do it: I go to each of the collection places I set up in the Collect Phase (physical inboxes, Evernote, and email accounts), and follow this workflow: 1. _Is this actionable?_ Is there anything I need to do with this at any conceivable point ever? If not, I should delete it or store it for reference, but my experience with this item is _gone forever_. I will never see this again, and it will not invade my life ever again. 2. _Can I do it in two minutes or less?_ If I can, then I do it. There’s no sense in creating a to-do list item to reply “Yes” to the email “Are you coming tonight?” That would be silly. So keep it simple and get stuff out of the way that would just overload your system. 3. _What actions need to be taken for this to be done?_ It’s not enough to say “Pool care” in your list. Your list has to contain _actions_, not just stuff. The action might be “Call the pool care company to schedule a filter cleaning.” Or it might even be “Search for a pool care company to help me with filter cleaning.” _It’s not stuff, it’s actions._ 4. _Add the actions into [Checkvist](http://www.checkvist.com)._ I add to Checkvist whatever comes to mind that I need to do to finish what the item in my inbox represents. At first, I recommend just having a _To Do_ list in Checkvist and making it more complicated later. You are dumping everything into that program because it is going to be your one-stop shop for getting things done. 5. _Archive the item._ It doesn’t stay in the inbox. Archive it. Move it to a folder. Whatever. But you will never see this again unless you need to.
In this system there is a clear distinction between _what is processed_ and _what needs to be processed._ Some more notes about this system: _I process my items from the top down, with few exceptions._ This is important because in this workflow, you aren’t _doing_ anything; you’re just processing your items. If you think, “I don’t want to get to this because it will take too long to do,” then add the item to your list! I have an @Actions folder for these situations, so I know I can go back to the email and write a long response later. _My email inbox has zero items in it at some point every day._ You think I’m crazy, I know. Believe me, once you adopt this workflow, you won’t ever go back. I’ve been told by people higher up than me that email gets unbearable and that my workflow is impossible. I’d like to think that it isn’t. I’ve already started forwarding emails to people on my team with short responses: “Yours” or “Let me know if I need to do anything.” Most threads I’m copied on aren’t important. So I think this applies to everyone: you _have_ to know what you need to get done in your life to have any sanity. The key to processing is translating _Stuff_ into Actions. I can tell when someone really knows GTD by the items on their to-do list. “Set up blog” becomes “Buy domain on BlueHost” and “Set up Genesis Theme on blog” and about fifty other things. How much faster do I burn through the fifty specific, actionable things than my “Set up blog”? Honestly, for me “Set up blog” goes nowhere, because “watch episode of Biggest Loser” is much more actionable. Do you think inbox zero will work in your situation? If not, why not? --- # Collect for Sanity URL: https://hedge-ops.com/posts/collect-for-sanity/ Discover how to declutter your mind and life with our effective collection system. Learn to organize your thoughts, tasks, and worries for later, freeing you to truly focus. We are inundated with stuff. Stuff we need to act on.
Stuff we need to remember. Stuff we need to worry about later. Most of us are stressed out by the amount of stuff in our lives. What to do about it? The system outlined in Getting Things Done starts with collection: there needs to be a system in your life for collecting things that need to be processed later. So when you’re driving down the road and think of something, what is your system for capturing that for later? When you get a bill in the mail but don’t want to pay it, what is your system for that? Without this system your mind will always be reminding you about everything. And you will never find peace. You will constantly be running from thing to thing, never able to say to yourself, “Relax, I’ll get to it later.” Because your mind knows you won’t, because you don’t have a system for it. Here is where I collect things to be processed later: - _Inbox at home._ All my mail and bills go here - _Inbox at work._ All my physical meeting notes and receipts go here - _Evernote._ All my digital meeting notes go here - _Gmail Inbox._ All my personal email goes here - _Outlook Inbox._ All my work email goes here If I want things to be processed in my life, they _must_ go here. If I put them here, I can forget about them, because I have a system to deal with them. One thing I haven’t figured out a system for yet, but want to: daydreaming away from my computer. For this system to _really_ work, you have to empty your mind of all your stuff into a system. Anything you ignore will come back over and over again. In the process phase you can say _no_ to those things, but the key is that you tell your mind, “Forget about it. It’s in the system. Trust me.” This frees you up to _truly focus._ Right now I have no system for when I’m thinking in the car and realize, “I need to organize my closet.” Normally I forget about it by the time I arrive at my destination, and my mind gets stuck in a loop.
The obvious solution to this would be my phone, but a lot of times it’s locked and difficult to get to. Another solution might be a portable recording device. I think I might experiment with that. Has anyone come up with a good system for dealing with these times when you’re away from your inboxes or computers? --- # Planned Thinking URL: https://hedge-ops.com/posts/planned-thinking/ Discover the power of planned thinking in boosting productivity and reducing stress. Learn how the Getting Things Done (GTD) system and tools like Checkvist can transform your work and life. Follow along for practical tips and insights. In 2012 our efforts to add automated integrated testing to our central Point-of-Sale product were showing great results. We [were finding 20–40%](/posts/measure-for-reality) of all defects found in the software, many of them within hours of being introduced by the developer. It took years of hard work to get to this place, and I was feeling good about our accomplishments. But there was a problem. I was running everything on adrenaline. I was working extra hours, reacting to everything, and doing the next best thing to be done. But I was neglecting other huge things. It was review time, and my boss wanted me to take my capabilities to another level. His advice was among the most valuable I’ve had in a review: “You need to stop at least once a week and think about things. You just need to stop and think.” My goodness, he was right. And I knew better than this. Years ago I read the book Getting Things Done (GTD for short), which revolutionized how I did things. My problem was that I hadn’t developed a system for GTD principles. In the last year and a half I’ve focused on growing in this area. I’m so excited to share my system with you, because I think it’s a great one that illustrates the concepts in GTD that I have only recently taken to the next level. If you read the book, you get a good overview of the system.
I don’t believe you really understand it until you see a system in practice. The center of my GTD system is [Checkvist](http://www.checkvist.com). I met the creator of Checkvist a little over two years ago at JetBrains when I was discussing TeamCity with them. Kirill was taking notes in Checkvist and copied me on it. I didn’t take the tool seriously until after the review at the start of 2013. This tool is the single biggest reason for my success in the last 18 months. I can’t imagine life without Checkvist. In fact, this blog is managed through Checkvist, which helped me create a workable plan that I could execute on. There are five phases of Getting Things Done that I will cover in detail in four posts: - _Collect:_ how do you take the things in your life and put them in places that you know you’ll get to when you have time? - _Process/Organize:_ now that you have time, how do you take the things you have collected and put them in a system that you trust you will come back to later? - _Review:_ what is your system for coming back to the things you’ve put in your system? - _Do:_ when you’re going to do something, what is your system for knowing what is best? If you haven’t yet checked out Checkvist, I encourage you to do so, and follow along in the next couple of weeks to get yourself organized. I’m here to help! --- # How Growth Got Me Organized and Productive URL: https://hedge-ops.com/posts/productivity/ Michael shares how intentional growth transformed his productivity using the GTD method and Checkvist. Emphasizing the power of continuous learning and organization. > If you keep learning and growing every day over the course of many years, you will be astounded by how far it will > take you.
As I read this in the introduction of John Maxwell’s book [The 15 Invaluable Laws of Growth](http://www.amazon.com/gp/product/1599953668/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1599953668&linkCode=as2&tag=hedgeopscom-20), I realized how low my expectations were. At first, I didn’t believe Maxwell. My focus at the time was more on keeping it together and making the _next logical step_ than on reaching a destination I didn’t previously think was possible. But I took Maxwell’s advice and have focused on growth ever since. I’ve read books. I got a mentor. I pressed into my work and made things happen. I built my team, so it isn’t just me anymore having to do the major things; it is _we_. After two years of intentional growth, I can see Maxwell’s wisdom firsthand. I have grown so much in these two years and feel like I’ve only begun. The biggest area of growth has been my _productivity_ and _organization_. The other day I reviewed everything that was going on and took an action on one of my many projects. I went to talk with a person, who was on the phone, so I stopped to say hi to a friend of mine. He was questioning out loud how much my team had taken on and how it seemed impossible. Two years ago it would have been impossible. But now it is working, because I have focused on growing as a leader and on creating a system within which I can manage what needs to happen in my life. This week I have six different projects going at work at the same time (and one of those projects itself has three different subprojects). At home, we are going through an amazing amount of change, planned around our newfound desire for [contentment](/posts/achievable-contentment). On the surface it feels like I have five different jobs. Fortunately, I have a secret weapon I use to keep my sanity: [Checkvist](http://www.checkvist.com) and the Getting Things Done methodology. This will be the focus of the next couple of weeks.
But for now, open your mind to the possibility that if you intentionally grow over time, you’ll be able to do things you previously thought weren’t even possible. All that’s needed is that you start. --- # July Blog Update URL: https://hedge-ops.com/posts/july-2014-update/ Explore the latest July blog update where I share my journey in Taos, New Mexico, my reflections on past blog series, and plans for future content. Join me on my quest for intentional, contented living. This week I’m in Taos, New Mexico with my family and have been thinking a lot about things. I thought it would be a good time to share with you more about where I’ve been, where I’m going, and what you can do to help me. ## A Retrospective When I planned out how I would start my blog, I planned five different series of posts, four of which have been published: _Introduction:_ This shares some of my core values and lets you get to know me a bit. I naturally gravitated toward [contentment](/posts/achievable-contentment) and the [appropriate definition of success](/posts/failure-masquerading-as-success). These topics have remained very interesting to me and were fun to write. I think these posts are probably the closest to where this blog is headed. _Lessons Learned on my Install/Diagnostic Utility project:_ This coincided with speaking about it. These posts weren’t very popular; the [safety net](/posts/safety-net) post barely hit the top twenty. I had thought there might be more interest in lessons learned from the work-related sphere, but there hasn’t been. _Finances:_ Next I wrote about our financial story, which has played a big part in my character development in the last five years. This series was the most popular one, largely because I shared it on the [You Need a Budget forum](http://forum.youneedabudget.com/discussion/31368/success-story-posted-on-my-blog).
It was fun to write, especially when I wrote about the philosophy and mindset _behind_ making good financial decisions. I don’t see many people writing about that, and I think I may have insights in that area that can help people. The most popular posts in this series are also the most popular posts of the first two months: [the introduction to the story](/posts/failure-the-catalyst) and [how I got a month ahead with YNAB](/posts/month-ahead). _Lessons learned on my long-term Autopilot project:_ This coincided with speaking about it at our User Summit in Huntington Beach. The lessons I’ve learned have been valuable to me and I enjoyed sharing them, but my audience shrank during this series. This highlights a key surprise from the first couple of months: I haven’t been able to get any traction or response from what I do professionally. If I talk broadly about what I do professionally, people enjoy it, but the more detailed I get, the less response I get. That’s a strange reality that I want to understand more deeply, but I won’t do an entire series devoted to a work environment before I know that it will be well received. _Getting Things Done:_ In the next series I will write about how [Checkvist](http://checkvist.com) has helped me implement the Getting Things Done system and has increased my productivity to previously unimaginable levels. I’m really excited about this series because using this tool has created a breakthrough for me that has truly been revolutionary. ## My Purpose I write this blog for four reasons: 1. I want to create a network of people who want to grow and intentionally live a better life. 2. I want to grow as a writer and a speaker because I enjoy helping others win. 3. I want to help the vendors who have made me successful, giving back to those who have given so much to me. 4. I want to explore which of my ideas and passions resonate with an audience, so I can know which paths my career can take.
So far I have learned a ton and have enjoyed the work that has gone into this. ## The Future I can already tell by how these first two months have gone that I want to write more about intentional, contented living. I’ve read a few books on this and will unpack it in August and beyond. I’m seeing by the response that these types of posts are the most helpful to people, and they are the most fulfilling for me to write. ## What do I want from you? 1. _Subscribe to my email updates._ This is on the right side of the posts and is the easiest way to be a regular reader. 2. _Tell people about posts you like._ It’s really encouraging to see responses on social media, so the more you do this, the more you support what I’m doing and encourage me. Use the icons on the bottom of each post to easily do this. 3. _Comment/Respond._ If you’re an early reader of my blog, and you like what I’m writing, you’re an important part of what I’m doing. If you like something, please tell me through a comment or [contact me](/contact). If you don’t like a particular post or idea, tell me that too! It’s helpful to be able to hear from you. I’d love to hear from you one way or another: _What about the first couple of months have you enjoyed the most? What would you like to read more about?_ --- # Solve Problems by Isolating Them URL: https://hedge-ops.com/posts/solve-problems-by-isolating-them/ Combining problems complicates solutions. Simplify by isolating issues, whether it’s project tasks or choosing between Legoland and Six Flags. My kids have been drawn into the world of Legos. They love [Ninjago](http://www.lego.com/en-us/ninjago) and [The Lego Movie](http://www.imdb.com/title/tt1490017/). It turns out there is a [Legoland Discovery Center](http://www.legolanddiscoverycenter.com/dallasfw/) nearby. The other day they went to a birthday party there. The kids had an absolute blast. Legos, legos everywhere. It was totally awesome.
It also turns out that we have a [Six Flags](https://www.sixflags.com/overtexas) nearby. Fast roller coasters. Every amusement park craziness imaginable. We spent last summer doing that, and let me tell you, Mom and I really got sick of it. But the kids absolutely loved it. They even wanted to do it again this summer. So, DFW area: Legoland? Check. Amusement park? Check. [Lego amusement park](http://california.legoland.com/)? That will be a trip to California and $4,000, please. It’s interesting how the solution to a problem gets a lot more complicated and a lot more expensive when you combine it with another problem. I come across this a lot at work. I ruthlessly go through a project and eliminate anything that isn’t needed, because I want to ship it as quickly as possible, and I know every little thing adds to the time and complexity and makes the project that much more unrealistic. So are we going to the Legoland California Resort? Only if we have to. :) --- # Immunity URL: https://hedge-ops.com/posts/immunity/ Build problem immunity with systems that prevent recurrence. While not all issues vanish, consistent efforts lead to growth and unexpected solutions. We finally figured out a bedtime routine. But the kids were late to school every day. We figured out how to sit down every night for dinner. But we were spending too much on groceries. It’s a reality we all face: no matter what you do, [some other problem comes up to wreck everything](/posts/failure-the-catalyst). And the same was true for my project that was dedicated to improving the quality of our software. When we started, we focused mainly on making sure there was no problem with how an order was created, the receipt was printed, and the financials were calculated. We had prioritized this correctly; getting these three things right is critical to our success. Once we had ramped up our solution, we saw a _lot_ of problems that we were able to catch before shipping our latest releases to anyone.
We saved ourselves and our customers a lot of headaches that our competitors and their customers were likely suffering through. But over time, those problems decreased, and the system now seems to have an immunity to these types of issues. It’s not a phenomenon I totally understand, but it is one I’ve heard of from others in the industry. Our newfound immunity hasn’t made us immune to _every_ problem, though. The problems we _weren’t_ focused on were still there. We hadn’t focused on how the software is installed at a site or whether the correct environment is set up. So the software could be rock solid, but if the environment is off, we’re in trouble. It will likely never end, but it’s a fun journey to iteratively create immunity in the system through automation. My advice with problem-solving is to create a system that makes you immune to having the problem again. Get a month ahead of income so you never overdraft. Or put all your bills on autopay, so you never miss one. Or every Friday take your spouse out for dinner. But don’t expect all problems to go away entirely. Keep working at it, and you’ll grow beyond where you imagined, solving problems you didn’t even think you were capable of taking on. It’s a wonderful journey. --- # Ten Takeaways from the Last 10 Years at Radiant/NCR URL: https://hedge-ops.com/posts/ten-takeaways-from-the-last-10-years-at-radiantncr/ Reflecting on 10 years at Radiant/NCR: Value creation, embracing tools, prioritizing sales, and gratitude have been key. Always focus on delivering real solutions. “Five thousand, four hundred and thirty-two dollars and twenty-three cents” “Yes…” “Five thousand, four hun…” “Yes, we will take your home next Tuesday if you don’t pay us five thousand, four hundred and thirty-two dollars and twenty-three cents.” “I’m…” “I’m sorry, there’s nothing else we can do for you” “Yes, you will lose your house next Tuesday if you don’t pay us fi….” “Yes, thank you.” This was where I had found myself in the spring of 2004.
For some reason, I thought it was a great idea to accept a developer position at a law firm that specialized in foreclosures. In some cosmic twist, they sat me in the cubicle next to the lady who was delivering the bad news to people. All day. I had to get out of there. But to where? A recruiter had told me about a [_real_ software company](http://en.wikipedia.org/wiki/Radiant_Systems) that created a POS for restaurants called Aloha. I was done being the IT department. I wanted to be a part of a _real_ software company. So on July 12, 2004, ten years ago tomorrow, I started working at Radiant Systems. I arrived ten years ago a mid-level software engineer whose confidence was shaken by all the foreclosures I had to hear about and a rough few years in the post-Y2K job market. Looking back, I’ve grown so much. Here are ten takeaways from the last ten years at Radiant/NCR: 1. _Speak Up._ Early on I developed an opinion on what needed to change about my team’s situation, and I spoke up and did something about it. This fueled even more change and gave me a reputation for leadership, which opened up so many opportunities. 2. _Make a List._ When I started, I worked with a friend who always made lists, and I thought that was strange at first. But then I realized my colleague always got stuff done quicker than me. So I made lists, and got stuff done as well. People aren’t organized because they are dorks who like inane details; they’re organized because it works. 3. _Do it Right, But Do it Fast._ I’ve been passionate about doing things well for a long time. But I realized early on that the only way to do this is to get fast at developing software. I needed to take advantage of the tools available to me and get smart about getting things done, so I could have time to get it right. 4. _Tools Matter._ I don’t need to be a super-genius to be effective.
Really I just need to use the right tools, and let my teams, and my company for that matter, use those tools to be more effective. That way I win, everyone wins, and we can be more focused on the problem at hand. That’s why I love tools. It’s a win for everyone. 5. _Don’t be Religious about Process._ Early on I was very religious about being agile, doing test-driven development, whatever. I’ve realized over the years that it’s much more valuable to use those frameworks as starting points to solve the problem in front of you. If you solve problems, you get things done, and you create a profit for your company. It seems so elementary that one should make more money for one’s company than one costs, but the honest truth is that the degree to which you do that is the degree to which you will have flexibility on everything else: money, what you work on, how you work. 6. _Be Patient._ Over the past ten years I can think of so many times when things weren’t going well, or when I wanted something to change so badly. Eventually it did. As I say above, if you create value, that value is rewarded. So focus on the value, not on the drama or the desire to change everything overnight. 7. _You Live with the Past Forever, so Get the Present Right._ I am at a conference in Huntington Beach this week and at breakfast overheard two of our users argue about the proper approach to a weakness in a product…that I created seven years ago. It was a strange feeling that I had so much impact on these people so many years later. What I’ve learned over the years is that you have to live with the past for a long time, especially when you move projects and can’t influence the product directly, so get it right today, because you never know what tomorrow will bring. 8. _Value is Immune to Change._ In the last ten years, I’ve been through the worst recession ever, an acquisition from Radiant to NCR, and numerous other business cycle downturns and upswings. In every one of those, I’ve thrived. Why?
Because I’ve focused on bringing value to my employer beyond what they pay me. If one is valuable, change doesn’t matter. Even if the company folds, that value can be transferred elsewhere. So I don’t worry about change; I worry about value. 9. _Without Sales, Software is Dead._ There is a sales guy at this conference who is the center of attention. He’s laughing, drinking, yucking it up with the customers. And they love it. For years, the software engineer in me despised it. “He doesn’t know the first thing about software,” I would tell myself. I’ve learned recently, though, that without sales and marketing, software is only an idea that dies quickly. Software _needs_ to be sold, and that usually happens by people who actually had friends in high school. Sales is a valuable aspect of software and must be rewarded. 10. _Show Gratitude._ I’ve had my share of ups and downs in the past ten years. But when I look back, these ten years have completely changed my life. I couldn’t have gotten here without John Pearson, Vince Severns, Jeff Hughes, Jimmy Fortuna, and Honza Fedak helping me through, and the numerous team members and other leaders who have believed in me. I’ve come a long way since I was grimacing next to the lady explaining the foreclosure to the poor soul on the other end of the line. I’ve been very open with people about how great this journey has been for me. --- # Problem Owner is Solution Owner URL: https://hedge-ops.com/posts/problem-owner-is-solution-owner/ For solutions to be effective, they must be in the hands of those facing the problem. Understand your audience’s needs for true traction and success. The room was standing room only. I was playing _[Welcome to the Jungle](https://www.youtube.com/watch?v=o1tj2zJ2Wvg)_ as loud as my company-issued laptop would play. There was a considerable buzz in the room. I was [speaking at a conference session](/speaking) entitled _The New Diagnostic Utility_.
This wasn’t a Get Rich Quick with Flipping Real Estate conference session. This was The Diagnostic Utility. What was so jarring about this experience was the lackluster response I had gotten to this tool up until this point. The idea came [from a colleague](https://www.linkedin.com/in/nicolemillspmp) who dealt with diagnosing environmental issues at sites. She and her team were excited about it. I understood the need. But no one else seemed to be enthused. Why not? The conference I was at was full of people who install our software at restaurants all over North America. They had issues that needed to be diagnosed. To them, the title _The Diagnostic Utility_ translated to _Save Yourself Hours of Time Making Sure You Did Everything Right_. This was the [safety net](/posts/safety-net) they needed. Their attendance and enthusiasm confirmed it. This is probably the most valuable lesson I’ve learned this year: a solution is only effective when it is _in the hands_ of the one who has the problem. Not the one who _knows_ about the problem. Not even the one who is _losing money_ on the problem. The one who has the problem. So when your kid doesn’t want to get to school on time, the solution is waking up earlier, but it’s only effective when your kid _wants_ to be on time to avoid a consequence. When a project is proposed but the people who own that process don’t believe there’s a problem, you don’t do the project. When you’re getting frustrated that a customer isn’t responsive enough to your solution, perhaps they don’t see there is a problem, and perhaps you need to find another customer. That was the case for The New Diagnostic Utility. Once I found the right audience, everything fell into place. --- # Measure for Reality URL: https://hedge-ops.com/posts/measure-for-reality/ Success requires tracking and sharing key metrics. Without measurement, even great achievements go unnoticed. Regularly evaluate and communicate progress.
Buffalo Rib-eye, medium at [Reata](http://www.reata.net/fort-worth-restaurant.html) with a glass of cab. That’s what I get when I’m ready to celebrate. We had finally sold our house and knew we needed to jump on the next one. On the first day of looking, we found a house with a lot of potential and decided we wanted to make an offer. When the offer was accepted, we got babysitting for the kids and headed to Reata to celebrate. Then I got a text. Look away, look away! Another one. My wife is more important than this phone. Another one. A friend of mine wanted me to come work with him. The offer was very attractive and tempting. But I didn’t take it for a number of reasons, one of which was that I felt like my work wasn’t finished at my current job. Then the next two weeks were hell at work. Negativity. Failure. Struggle. Wondering to myself if there would ever be anything but negativity, failure, and struggle. I was fed up and needed to get honest with myself, so I went to [Esparzas](http://www.esparzastexas.com/home) and mapped out how I could get out of the situation I was in over a few margaritas. Let me let you in on a little secret of mine: every three months, I go to a restaurant, have at least two margaritas, and write out what I’m happy about, what I’m not happy about, and what I’m going to do about it. This particular day I was not happy about the fact that I had turned down a great offer and didn’t have a wildly successful project at the time that made that decision feel worth it. The problem I uncovered that day over a few margaritas was that we were doing some great things, but those great things weren’t _measured_ and _reported on_. So to outsiders, especially senior management, those great things didn’t exist. What I needed to do was measure the outcomes we were creating, and then share those measurements with the stakeholders on the project.
That would turn _is this ever going to work?_ into _this is working, but they have a few issues right now._ In a few months, we created a daily report that showed the project’s output _every day_ for the runs that happened _every day_. This was a game changer for my project, and for my job. Now I look for any way to measure the outcomes I’m creating, because I know that no matter how good the outcome is, if it isn’t measured, it doesn’t exist. To wrap up the story, I made the right decision in staying with my company. The project just needed some regular, well-advertised measurement. Once that was in place, everything changed. And my friend left that job six months after he made that offer, due to fighting over which direction to take their product. A year after that, the project he was on was cancelled. --- # Embrace Difficulty URL: https://hedge-ops.com/posts/embrace-difficulty/ Face challenges head-on for success. Embrace daily routines to overcome difficulties, turning daunting tasks into manageable ones. Confront, don’t avoid. It was an impossible project and I was scared. The fact that [the world was ending](/posts/christmas-with-teamcity) was the least of my concerns. We have software that is so flexible and configurable that it was impossible to fully test all the combinations of options our customers could run. My leadership at the time asked us to mitigate this by recording everything that happened at a restaurant and playing it back internally on prerelease software to make sure everything behaved the same. I can’t overstate how I felt: this problem scared the crap out of me. So much could go wrong, and there were so many issues to figure out. How would I get the software started? How would I know when/if the simulation was running correctly? What about all the other automation projects I had heard about over the years that had been cancelled due to lack of results?
Would this project (and my career with it) be the next one thrown on the scrap heap when management realized how impossible it was? I had to step back, step away from my fear, and think of a good strategy. We were facing a difficult problem. The process we would come up with was likely to fail. A lot. Instead of running _away_ from that failure, we needed to embrace it. We needed to welcome it with open arms. Because if we didn’t face the failure head on, _we would never get past it_, and we would fail. So we created a system where we ran any automation we had _every day_. In fact, this is how we do it today, with over 150 restaurants running in a virtual environment and over 4,000 small scenarios. We do it every night. Do we have to do it every night? Technically no. But we embrace the difficulty of it by doing it every night, so we get quick feedback on problems and keep it on track. People still argue with me over whether we _have_ to run this every night. I’m fine with that; I know it seems silly at times. But I think it’s key to our success: we embrace the difficulty by doing difficult things all the time, so we can learn how to deal with them and make them not difficult anymore. Here are a few other examples of embracing the difficulty in a system: | When | Embrace Difficulty By | | ------------------------ | --------------------------------------------------- | | Doing Laundry | Doing it every day | | Budgeting | Starting every month and facing reality | | Blogging | Keeping a month ahead and posting at a regular pace | | Learning to Cook | Cooking a regular meal on a set day each week | | Keeping the Family Close | Eating dinner together every night | What have you avoided that you need to embrace in order to overcome it? --- # Engineering Travel URL: https://hedge-ops.com/posts/engineering-travel/ Shift from consumerism to contentment by rethinking travel. Opt for walking, biking, and consolidating car trips.
Embrace local living for a healthier, happier life. This year our family has been [focused on contentment](/posts/lowering-expenses-with-contentment) over consumerism. It all started with realizing that our children believed the lie that stuff was going to make them happy. Where on earth would they learn that behavior? It was probably from that school they went to. No, as we looked further, we saw they learned it from us, because we were living that way as well. When we started focusing on contentment, we realized it was a part of our whole lives. This wasn’t about just saving money on groceries and not eating out. Everything we did was a topic of conversation. And thus, the topic of travel came up, and we went through a bit of a transformation in how we see it. As I did in [an earlier post](/posts/engineering-laundry), I’ll walk you through what I did and hope you get some ideas about how you can find more contentment in your travel choices. ## Inherited System The system we had gone with was similar to what most people in suburban middle-class America do: 1. _We drove wherever we wanted._ If we needed to go to the store, we drove. If we needed to go to Maryland, we drove. There was never a question of _whether_ we should be using a car. We drove. 2. _Travel was a fixed expense._ Travel expenses (gas, repairs, insurance) to us were almost the same as our mortgage payment. We just accepted the amount and moved on. 3. _Fuel economy was a medium priority._ When we shopped for cars, we took fuel economy into consideration, but we didn’t make it a priority. We had two cars that would each fit our entire family, though of course we didn’t need that. We did have parts of our system that were a bit different from the norm: 1. _We live and work locally._ We are less than a ten-minute drive from my work, our church, the kids’ schools, and grocery shopping. This was a huge priority to us when we moved a couple of years ago. 2.
_We drive used, paid-off cars._ We’re OK with the car not being a status symbol. We’ve always driven non-luxury, used cars. These last two elements are critical components of the new system outlined below. ## Problems with the System When we started looking at our lives and budget holistically, we saw a few problems with the system we had been living by: 1. _We had a sedentary lifestyle that was setting us up for health problems._ I had a desk job. Anytime we wanted to go _anywhere_, we went and sat on a metal-encased couch that launched us down a paved road. I was gaining a few pounds every few years. I went to the gym sometimes, but it was hard to fit it into my busy schedule. 2. _We had a consumptive mindset._ I’m convinced that [a key to contentment](/posts/achievable-contentment) must be to think of yourself as a producer instead of a consumer. If you feel _entitled_ to consume, then nothing is ever good enough. When I produce as much as possible, the times when I do consume are wonderful, gracious experiences. Our system made us 100% consumers of our transportation, via the car. ## New System Our new system makes us producers as much as possible and leads us to a healthy lifestyle: 1. _We walk or bike within five miles of our home._ We don’t allow ourselves to get in a car if we need to go down to the grocery store; we bike. And a magical thing happens: we get exercise! It’s such a transformation when fitness becomes a part of your life rather than a scheduled activity. What about the kids? They bike too! We have to focus on safety the whole time, but we do, and they make it. 2. _We consciously use the car._ When we are going to use the car to go somewhere _more_ than five miles away, we ask ourselves, “Do I need anything else?” We try to consolidate trips because _getting in the car is a special activity_. If I’m going to launch a couch down a paved road, I’d better have some good reasons for doing so. 3.
_I periodically work from home._ A day or two a week, I work from home. We’re creating an office for me, but for now I work at the kitchen table, or, if the kids are there, I ride my bike down to the library. This helps me find the focus to think about things, but it also keeps everything local and manageable. 4. _I bike to work._ I have been biking to work 80% of the days I work there. 5. _We have one car._ I’m selling my Camry, and we’ll only have a Sienna minivan. We really only need to take the family around in one car, so why do we need two? We don’t! I absolutely love this new system. It has to be wrapped up in a mindset that convenience and comfort aren’t the most important things in life. This is definitely a more inconvenient and uncomfortable system. But those weren’t our goals; contentment and production were our goals. Living and working locally and minimizing our use of the automobile have left us more contented and productive than ever! --- # Increase Income by Negotiating URL: https://hedge-ops.com/posts/increase-income-by-negotiating/ Boost your income by negotiating your salary. Chapman’s book teaches that everything is negotiable and emphasizes intentionality and the power of silence. When one [gets serious](/posts/move-the-needle-with-dave-ramsey) about meeting their financial goals, the obvious immediate focus is [to lower expenses](/posts/lowering-expenses-with-contentment). If you want to get out of debt, stop going to Starbucks. And find a way to be content with not having it anymore. But there is another side of the equation as well: your income. Back in 2010 when I shifted my personal finance perspective, I decided to get serious about my income as well as my expenses.
And I thankfully found the book [Negotiating Your Salary: How to Make $1000 a Minute](http://www.amazon.com/gp/product/0931213207/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0931213207&linkCode=as2&tag=hedgeopscom-20&linkId=W6BC6IOM726IEVJK) by [Jack Chapman](http://www.salarynegotiations.com/). The book changed my thinking in three important ways: _Everything is negotiable._ Stop looking at things as fixed. They aren’t. You need to create more value for your company, and they will be willing to pay you more. Negotiate for the value that you are creating, and by all means _work your butt off_ to create the value. But don’t just passively go through life accepting whatever is given to you. Negotiate. _Be intentional._ Don’t let performance reviews happen to you. Be intentional. Be clear about your goals and work with management to accomplish them. Don’t just accept what they’re telling you, but press in and make sure that they respect you. _Stop talking._ When negotiating, the one who talks is the one who is losing. Be simple about your goals and stop talking. This is the part that I struggle with the most. [I can’t stop talking](/speaking). This turned a seemingly static situation I had no control over into a dynamic one where I increased my value to my employer and, over time, my income. It takes a while, but if your goals are realistic and you work hard, you’ll get there. I know as a manager I love managing people who are intentional and motivated to work hard to create an outcome. I’ll always take that person over an unmotivated, lazy person who is fine with their salary. --- # Lowering Expenses with Contentment URL: https://hedge-ops.com/posts/lowering-expenses-with-contentment/ Discover how adopting a mindset of contentment can help you lower expenses and meet financial goals. Learn from our journey with YNAB and Dave Ramsey.
When [YNAB](/posts/you-need-a-budget) and [Dave Ramsey](/posts/move-the-needle-with-dave-ramsey) entered our lives, we became an unstoppable force to meet our financial goals. Our original debt payoff forecasts were beaten by a mile, largely because we got serious and focused all of our time, energy, and money on debt payoff. We had met some really great goals and were feeling great. But there was a problem. After we met our debt payoff goal, the angels did not descend from the heavens and make everything better. And there was still an insatiable appetite for more stuff. The only difference was that now I had more money to spend on getting that stuff. It was like we had finished a long diet but done nothing about our love of ice cream. So we got more stuff for a few months. And I felt the same. Dave Ramsey has a saying, “Live like no one else, so that later you can live like no one else.” We had lived like no one else, so that later…we would get depressed about how meaningless it all is. Here’s the problem: you’ll never win with discontentment. You never find enough. You’re always wanting more. So in the long run, the _sacrifice now_ will not lead to peace and balance later, because you’re training yourself for discontentment.
Let’s illustrate this in other areas of life: | With | Live like no one else looks like | But contentment looks like | | -------- | --- | --- | | Dieting | I’m going to join CrossFit and do that multiple times a week, and eat Paleo and judge everyone else who doesn’t as idiots | I’m going to learn how to enjoy the calories I really need, appreciating that I’m swimming in a sea of cheap calories that kings of old didn’t have | | Work | I’m going to make this happen and work Saturdays and drive everyone to do the same…this is going to launch my career into the stratosphere | I’m going to really think about what and how I’m doing things, and make sure I do the right things with the right team, and do those things well | | Marriage | We’re going to go to counseling, a marriage retreat, and then sit down every night and talk | I’m going to give her a break and serve her and try not to control her | | Money | If I get an extra job, I can pay off my debt and get an expensive car ten years from now | I don’t need an expensive car. Or all this stuff. I’ll stop buying stuff and find that I have a lot of money left over that I can save. And wow, I’m happier! | Sometimes the middle column is called for. But only temporarily, because discontentment leads to discontentment. It’s a never-ending cycle. And the only way to get to where you want to go is through contentment. You may need to spend a few months moving the needle. Great. The focus, though, should be on finding contentment in your life, and creating a margin of time, money, and focus that will help you accomplish your goals long-term.
--- # Move the Needle with Dave Ramsey URL: https://hedge-ops.com/posts/move-the-needle-with-dave-ramsey/ Dave Ramsey’s extreme approach to debt focuses on psychology over numbers. His method emphasizes short-term sacrifices for long-term financial freedom. We had [a failure that was the catalyst to change](/posts/failure-the-catalyst) and [a tool that would help us change our relationship with money](/posts/you-need-a-budget). Now we needed to get out of debt. We needed to get serious. We needed [Dave Ramsey](http://www.amazon.com/gp/product/1595555277/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1595555277&linkCode=as2&tag=hedgeopscom-20&linkId=O5HAQUNJLACSUUTX). Full disclosure: our household has kind of moved away from some of Dave Ramsey’s ideas since 2010, when we went all-in with his program. I’ll get into why in the next post, but for now I want to write about what I appreciate about him and how he presented personal finance to us in a way that got us serious about getting out of debt. Dave Ramsey doesn’t do debt. He hates it. To him, the only reason one should get into debt is to buy a house, and he doesn’t even fully recommend that. Most people think he’s extreme. Ramsey’s genius is that when it comes to being in debt, being extreme is exactly what is called for. The numbers aren’t the reason people aren’t getting out of debt; the psychology is. So when you get out of debt the Dave Ramsey way, you sell everything. You don’t eat out. You think about it every day. You work extra. Maybe an extra job. You get sick and tired of being sick and tired. The principle here is _Move the Needle_. Sometimes it makes sense to get crazy and move the needle in some area of your life. Make a change happen, possibly at the short-term sacrifice of other things, so you don’t quit. Once you accomplish something, quitting it is no longer an option.
So if you need to make a change and gut it out in the short term in order to create the environment where you can win in the long term, have at it! I know that with getting out of debt, this is the only way to actually do it. If you try to nickel-and-dime your way out of debt, you’ll never get there, because you’ll get distracted and go after something else. When we got serious and decided to move the needle with getting out of debt, all the pieces fell into place, and we got out of debt really quickly. The problem is when this becomes a way of life. At some point, you’ll need to stop moving the needle. More on this in the next post. --- # Month Ahead URL: https://hedge-ops.com/posts/month-ahead/ Discover how to break free from living paycheck to paycheck with our guide to getting a month ahead in income. Experience the benefits of financial simplicity, peace, and clarity with our practical steps. You’re at the mall, it’s the 12th of the month, and you want to buy sunglasses. Do you have the money? If you’re living paycheck to paycheck, it depends on whether the mortgage will come before or after your paycheck. It might also depend on your checking account balance. This is normal for many people. This is also insanity. The best thing [YNAB](http://ynab.refr.cc/C9FV2R2) did for us is that it encouraged us to get a month ahead in income (they call this [Rule Four](http://www.youneedabudget.com/method/rule-four)). Paychecks we receive in June pay July’s expenses. But for most stuck in the paycheck-to-paycheck rat race, the idea itself seems like a fantasy. We were there, definitely, but we got obsessed with getting a month ahead because we wanted to experience its benefits of simplicity, peace, and clarity. Here’s how we did it: 1. _We Took Advantage of Biannual Three-Paycheck Months._ Luckily, January 2010 was a three-paycheck month. We budgeted every month for two paychecks, so we were already halfway to our goal the very first month.
If you’re paid biweekly, every six months is a golden opportunity to get ahead. 2. _We Drew a Line in the Sand._ Before we started using YNAB, [we had used our credit card](/posts/failure-the-catalyst) and were in the process of paying it off. Normally that would have been our top priority, but instead of working on that, we made those accounts off-budget, set them aside, and paid the minimum. This rerouted cash that had been earmarked for paying off a credit card balance toward getting a month ahead. This is how I recommend approaching YNAB: what’s in the past is in the past; focus on creating a system that will work for your future and everything will fall into place. 3. _All Margin Went to Rule Four._ We had a few hundred dollars of margin every month. Sometimes it would go to something fun or to meet one of our goals. When we implemented YNAB, _all_ of that money went toward this one goal. 4. _Moved End-of-Month Bills to the Next Month._ We paid our car insurance every month. I called the insurance company and had them move the due date from the 28th to the 2nd of the next month. To them, it’s a few days. But in YNAB, it gets you out of the current month and into the next month…and closer to being a month ahead. 5. _Planned for a Lean First Month._ We put everything into getting a month ahead, so it was OK if we didn’t have the _full_ set of monthly income for our first Rule Four month. We just needed to cover the necessities, and let a month pass. At that point we would be a _full_ month ahead on income. This is the time to have one of those _I won’t buy anything_ months! 6. _When We Were Close, We Cautiously Overrode Rule Three._ Over the years I have really grown to appreciate the wisdom of [Rule Three](http://www.youneedabudget.com/method/rule-three): every month, start over and budget the money you have available. Don’t worry too much about making every month perfectly balance.
When you override Rule Three by carrying a negative category balance, you run the risk of having to focus on your account balances, and you lose many of the benefits of the YNAB system. When we were just getting started and within a few hundred dollars of being able to make it through February on January’s income, we had to prepay our portion of a family vacation that would happen in the summer. Rather than wait another month or two to meet our goal, we had that category carry a negative balance over to the next month. We had done the earlier steps, so there was no risk that our balances would get too low. Now that we’re a month ahead, if we’re at the mall and we want to buy something, we check the category balance. We don’t care about when the mortgage payment will go through, when I get paid, or the checking account balance. All of those variables have been removed because we are spending last month’s money this month. Now that we have clarity from focusing only on the category balance, we put our energy into making better decisions. And that for us has made all the difference in helping us go wherever we want to go financially. --- # You Need a Budget URL: https://hedge-ops.com/posts/you-need-a-budget/ YNAB, a user-friendly budgeting software, emphasizes goal-setting, real-life adaptability, and planning for surprises. It’s a game-changer for personal finance. I experienced [a financial tailspin](/posts/failure-the-catalyst) that got my attention enough to make some serious changes in my life. I needed to assemble a team that could help me meet my goals. A friend of mine had told me about budgeting software that he loved and that fit with how I liked to budget. I hadn’t taken [You Need a Budget](http://ynab.refr.cc/C9FV2R2) (YNAB, pronounced why-nab) seriously until then, but now was the time to dive deeper into this tool to see if it would fit my needs.
What I found was a great software tool created by a team that cared more about serving people and changing lives than about making money. This is a critical piece of being on my team: you must be more dedicated to serving others than to enriching yourself. And the small team that created YNAB was dedicated to service over self. This software has changed my life in so many ways; I feel indebted to them and will do whatever I can to make them successful. So what is so good about this software? A lot, but I’ll focus on four aspects: _Simple Enough for Everyone._ This was a requirement for me. I needed [Annie](/about/annie) to be involved, and she isn’t a finance guru. YNAB presents budgeting in such a simple way that non-finance types can understand it with [minimal training](http://www.youneedabudget.com/support/training-and-education) and make it happen. Annie’s involvement in our goals has been critical to our success. _Focus is on the Right Goals._ In other budgeting programs, you’re focused on account balances or on fitting your life into the system. YNAB has great systems for keeping you focused on where you want your money to go, and for forgetting about the timing of bills or keeping everything balanced perfectly. So now we think about what our goal is and how to meet that goal, month to month. And then we forget about it and live our lives. _It’s Reality Focused._ I was helping a friend set up YNAB, and he said to me, “I want to wait until next month to start this, so I can buy a couple of iPhones.” I informed him that buying iPhones was as simple as allocating hundreds of dollars into a category and then spending it. YNAB doesn’t force you into a perfect situation or month. Those don’t exist! You simply tell it what you want to do with your money. That can even include blowing a lot of money on electronics; it’s up to you! _It Plans for the Unexpected._ It doesn’t expect you to perfectly plan every cent and then rush to the software whenever something changes.
It lets you easily change your plan, or just wait for the next month, when the available money will be adjusted to reflect how well or badly you did the month before. A solid system must plan for noncompliance with the plan. Four and a half years later, YNAB is helping us meet our goals just as it did in the beginning. It is our constant companion through life, always helping us bring together our intentions and our actions. If you’re looking for something to help you meet your financial goals, you owe it to yourself to look at [YNAB](http://ynab.refr.cc/C9FV2R2). --- # Failure the Catalyst URL: https://hedge-ops.com/posts/failure-the-catalyst/ Failure can be a catalyst for positive change. While setbacks sting, they often pave the way for growth and success in various life aspects. “We don’t take credit cards, only cash.” It was November 2009, and I was locked into _Project: Have a Third Child_. Part of the deal I made with [Annie](https://www.hedge-ops.com/about/annie) was that we could offset some of the hardships of having another child by doing some things for ourselves. This translated partly into changing our half bath into a full bath and completely redoing the furniture and decorating of our master bedroom. As with most home projects, reality quickly surpassed our budget. That was OK. I was going to use my credit card. Never mind that I had promised myself I would never use it in this manner…this totally frivolous remodeling project was an emergency! I would pay it off, I promised. I just needed to get through this. [My plumber](http://www.viperplumbing.com/) dug a hole in our foundation, extended the toilet drain to what would be the shower, and charged me a hefty sum to do it. And he only took cash. As did [the tile guys](http://mastertilesetter.com/) and [the painter](http://paulhedgpethpainting.blogspot.com/). Crap. The inevitable occurred. I overdrew our checking account, and our finances went into a tailspin that took us a few weeks to get out of.
My wife and I awoke to how totally out of control and useless our current financial system had been. Some major changes were on the horizon. But these changes began in failure. At the time, failure seems so terrible, so awful, that nothing good can come out of it. But in reality, failure is often a catalyst for change. | When you fail by | it is the catalyst for | | --- | --- | | losing your job | putting yourself on the right path to meet your goals, with the options wide open | | a marital separation or affair | finding or abandoning your true commitment to that person, which means there will no longer be a lukewarm relationship | | a project failure at work | understanding what will not work, so you can pursue what will work, either for you or your organization | | gaining weight | reanalyzing your relationship with food and an active lifestyle and making changes | Failure feels terrible in the moment, but it really is a wonderful blessing, because it is the only catalyst I know of for real success. I haven’t yet been able to believe this enough to make failure suck any less, but it sure is nice to know while I’m going through it. --- # How I Applied Engineering Skills To Laundry URL: https://hedge-ops.com/posts/engineering-laundry/ Learn how to apply engineering principles to simplify your laundry routine. Discover a new system that reduces sorting, encourages ownership, and brings a sense of completion to this never-ending chore. As we grow as engineers, we start to see how engineering principles relate to [all areas of life](/posts/life-is-art). Applying those principles broadly helps every area grow. When [Annie](/about/annie) asked me to take over the laundry, I decided to approach the problem as an engineer: ## Inherited System The system I inherited went something like this, for our family of five: 1.
Each room on the second floor has a clothes hamper (all of our sleeping rooms are on the second floor). When cleaning the room, put the clothes in the clothes hamper. 2. If you’re on the first floor, put the clothes in the laundry room which is on the first floor. 3. Once a week, on no particular day, take all the clothes from the rooms and pile them in the laundry room. 4. Sort the clothes into loads of whites, colors, and towels. 5. Wash and dry the clothes (this takes two days). 6. When the clothes are dried, put the clothes that are to be hung up flat on top of the dryer. Put the clothes that are in dressers in a bag that hangs on the wall, one for each person. There is also a bag for kitchen towels and bathroom towels. 7. At some point take all bags upstairs, and put the clothes away in dressers. 8. At some point hang up the clothes. This entire process would begin as it was ending. In other words, laundry ended up being a never-ending mess of never-doneness. ## Problems with the System I looked at the system not as a household chore but as a system that I could maximize using principles I use at work. So looking at it that way, what were the problems with this system? - _Excessive Batching._ Everything was piled up and done at once, and there was no flow. When there is so much work in progress, you can’t optimize the system because there are too many variables in it. It’s like trying to cook Thanksgiving Dinner…all at once. There has to be a clear process in place and simplicity at every step for any hope of true optimization. - _Excessive Sorting._ Laundry was sorted at least two times within a large set. There was a separation of _all the laundry_ into loads and another separation of _all the laundry_ by who owned the laundry. This was especially difficult with the kids, whose sizes are remarkably similar and ever-changing, even though they insist it’s totally obvious that the shirt belongs to one or the other. 
- _Lack of Ownership._ My wife was doing all the work. None of us wanted to do the job. So we were leaving valuable contributions from me and my sons on the table, which led to…
- _Despair from Lack of Closure._ The system didn’t give you a sense of being _done_. In software terms, there was no _release_. It was just always going.

## New System

I did some internet searching and [came across an article](http://lifeasmom.com/2013/04/kids-can-do-laundry.html) that was extremely close to what I ended up implementing. It addresses all the problems stated above. Here’s the system:

1. Every person in the house gets their own laundry basket, except for the parents, who share theirs. Every person is responsible for putting their own clothes into _only_ their own basket.
2. There is a basket downstairs that towels and linens go into.
3. Every basket is done by its owner on a particular day of the week. Someone helps too if that’s needed. Our schedule is:

| Day       | Basket         | Owner        | Helper         |
| --------- | -------------- | ------------ | -------------- |
| Monday    | Parents        | Mom          | Dad            |
| Tuesday   | Oldest         | Oldest       | Dad            |
| Wednesday | Towels         | Dad          | Mom            |
| Thursday  | Middle Child   | Middle Child | Dad            |
| Friday    | Youngest Child | Dad          | Youngest Child |

## Benefits of the New System

We’ve run this system for a week now, and wow, has it made a difference! Here’s why:

- We’ve broken the whole system down into smaller, manageable chunks. That way there is a sense of progress and completion rather than despair. Since the laundry room stays clean (other than a basket with towels), it is a place where you can create an outcome relatively quickly and get out. You’re done. There’s not a big scary laundry monster in there to kill you.
- Sorting has been drastically minimized. Now when we wash the youngest child’s clothes, it’s extremely clear whose clothes they are when they come out of the dryer. This cuts down the total time we spend on it.
- Everyone owns laundry. It’s not something that only Mom does. And she gets to do the part that is most important to her (her own clothes), so we win by me not accidentally shrinking her brand-new sweater.

The problem one might have with this system is that you are doing laundry _every day_. But a different person is in charge of it each day, and it’s a manageable amount. It seems to be working out well for us so far.

The great thing I found through this process is how well management principles relate to so many other areas of life. That’s one of the things I want to explore in the future: taking wisdom from one area of life and applying it elsewhere.

---

# Surprise

URL: https://hedge-ops.com/posts/surprise/

Prioritizing the grand vision can overshadow the power of surprise. In strategy, unexpected moves can be game-changers, leaving competitors scrambling.

In [the last post](/posts/the-grand-vision) I wrote about a mistake I made where I focused too much on The Grand Vision and not enough on solving small problems. At the time, I felt that accomplishing The Grand Vision was going to be awesome, would save the world, and would mean a parade for my entire team, and cheering, and wonderful speeches given for us all. It would be glorious, except, now I realize, I totally screwed it up: I left no room for surprise.

Surprise is when someone’s expectation is suddenly shifted into a totally new place in an instant. Surprise has a way of changing the game immediately and causing your competitors to scramble. When your competition expects an outcome, they have time to spin it as no big deal. Your flaws (and there will be flaws if you ship it) will be the surprises, not the big game-changing outcome that you created. People adjust to the _new_ reality over a period of months, when in fact the reality hasn’t even changed.

This lesson will be central to everything I do for the rest of my life.
If I am trying to create an important outcome that makes a difference, I should focus _first_ on creating the outcome. If it is as important as I think it is, surprise will be my ally.

---

# The Grand Vision

URL: https://hedge-ops.com/posts/the-grand-vision/

Improving processes requires balance: while having a grand vision is essential, focusing on immediate, impactful solutions is equally crucial.

Late last year I started a new project dedicated to improving our delivery process and tools. I spent a few weeks talking to people throughout our organization about what the problems were and how we could best address them. And then I came up with _The Grand Vision_.

The Grand Vision was an awesome elevator speech where I drew quadrants and arrows, talked about the main problems I was seeing, and explained how to bring them together into one, totally awesome, unified structure that Would Save The World.

In one of the meetings, a key leader I respect said to me, “That’s ambitious.”

“Thanks,” I replied.

_It wasn’t a compliment._

The lesson I learned is that it’s important to map out the process and see how what you’re doing fits into the entire goal. _But it’s also important to solve problems immediately that have a quick return on investment._

This kind of thinking happens all the time:

| Ambitious                                                                                                    | Should be                                                                                   |
| ------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------- |
| I’m going to get rid of all sugar in my house and go _gluten free_                                           | On Fridays I eat donuts that people bring into the office. In June I’m not going to eat any. |
| I’m going to get a gym membership with personal training and wake up every day at 5:30 AM and get _ripped_!  | I’m going to bike anywhere within five miles of my house and do 20 push-ups a night          |
| We are going to a full-week marriage conference in Aspen, Colorado, so we can get our marriage back on track | Let’s go on a date and not look at our phones                                                |

It’s so easy to avoid the really tough problems in front of you by dreaming a big dream. I think many of us like to think that if we can’t do it all, then why do it at all? The real battle is won when we drop that all-or-nothing thinking and get something done… _today_.

When have you let the grand vision distract you from what really needed to be done right then and there?

---

# Safety Net

URL: https://hedge-ops.com/posts/safety-net/

Improving software updates for restaurants is challenging. The key? Before risky changes, build a safety net to ensure quality and desired outcomes.

Late last year I began seriously working on improving how we deliver updates to our software in restaurants. One of the most interesting parts of my job is how many aspects of technology become incredibly difficult when you have a chef twenty feet away instead of a data center technician in a lab coat. Updates are no exception: the operation of the restaurant itself is at stake, and we must get it right.

So how do you improve something like that? Believe me, this was something that was keeping me up at night. The problem is the huge amount of risk involved in anything going wrong in restaurant operations, but at the same time the huge operational benefit of improving it.

The insight we found was that before making a risky change, build a safety net to ensure that the change will have the desired effect. Build quality into the system, and you’ll be able to make the needed changes to the system. Otherwise, you’re dead when the first problem hits, and you’ll never recover.
This insight relates to so many areas of life and business:

| When doing           | create safety with                         |
| -------------------- | ------------------------------------------ |
| Software development | automated unit and integration tests       |
| A budget             | living on last month’s income              |
| A healthy marriage   | going on a date and having fun together    |
| A college class      | a study group                              |
| A business idea      | seeing if it can be profitable on the side |

A safety net is a critical aspect of any system I create today. The bigger the risk involved, the more I strive to include safety in the solution we create.

What safety nets have you created in your solutions? Have you ever created _too much_ of a safety net?

---

# Generosity with the Unexpected

URL: https://hedge-ops.com/posts/generosity-with-the-unexpected/

Setting aggressive yearly goals is vital, but unlisted objectives like fostering team efficiency with tools like TeamCity and git and promoting generosity drive true success.

Every year we sit down and create a specific set of goals for my projects. The [goals are always aggressive](/posts/measure-for-reality), and every year we wonder to ourselves how we are going to do it. Most years we spend the first few months trying to come up with a strategy for even making the goals possible. This is by no means an easy endeavor. And [achieving the goals](/posts/achievable-contentment) on this list is extremely important to me.

But it doesn’t tell the whole story. What is not on the list is just as important to my future as what is on the list. For the past five years, _getting a team set up on [TeamCity](http://www.jetbrains.com/teamcity/)_ has never been an explicit goal.
Neither has _introduce and administer [YouTrack](http://www.jetbrains.com/youtrack/) to increase the maturity and efficiency of teams_. Nor has _introduce [git](http://git-scm.com/) as a superior version control alternative to TFS and Subversion_.

But to me, these things that don’t make the list are major contributors to my success because they foster an attitude of generosity. I don’t believe success happens without generosity. Plans never explicitly state “be generous to others and solve problems.” But those who follow this path end up being supported by those whom they served, being served in return. I think this is a secret to my success: [serve others generously](/posts/christmas-with-teamcity), which builds a community of generosity of which I am also a recipient.

In contrast, the one who is stingy and focuses only on what will advance her own interests will end up hitting a ceiling of productivity. At some point those around her are alienated, aren’t growing enough to contribute at higher levels, or aren’t properly engaged in the vision of growth that is required for success to be achieved.

I have a standing invitation for anyone to put thirty minutes on my calendar to get Continuous Integration, issue tracking, or distributed version control set up on their project. Even though it isn’t on my list of explicit goals, it is my pleasure to make the world a better place, and, in turn, it indirectly helps me reach my own goals.

---

# Releasing Control

URL: https://hedge-ops.com/posts/releasing-control/

Entering marriage counseling with expectations, I learned from Les Carter that true success isn’t about control but serving and understanding others.

“Thank God we’re in marriage counseling so my wife can finally get her crap together.”

A thought that most people have when they enter the marriage counselor’s office.
I came into the office of [Les Carter](http://www.drlescarter.com/) that day with [my wonderful wife](/about/annie), ready for some change. I had a list of things. She was this. She was that. Why can’t she just…?

As we started, I heard a similar tone from my wife. She had a similar list, where only the subject had changed. He is this. He is that. Why can’t he just…?

“OK, OK,” I thought. “I can play this game. So the one with the highest score at the end wins. I will _definitely_ have a higher score at the end of this one, honey, don’t you worry.”

But Les wasn’t playing that game. Les gave us a nugget of wisdom that is now a mantra for us:

> The more you try to control, the less you are in control. The less you try to control, the more you are in control.

Les was touching on a key element of success: it’s not about creating the perfect level of control so you can have everything your way. It’s about serving, helping, and loving others. It’s about making others around you successful. Somehow, when this happens, you end up having more success and control than ever. But it’s never about the control itself.
This can be applied in many ways:

| When you want                | You must                                                                                                                                                                  |
| ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| A good marriage              | Chill out and let your spouse have some space                                                                                                                             |
| To be a good parent          | Give them freedom to be themselves, which does include structure, but also includes having their own personalities and interests                                          |
| A project at work to succeed | Enable those around you to be the best they can be and believe in them as you let them work                                                                               |
| Financial security           | Accept that there will be ups and downs and you won’t know when those will be, so it’s best to be a long-term investor who doesn’t bail whenever the market is in trouble  |

---

# Failure Masquerading as Success

URL: https://hedge-ops.com/posts/failure-masquerading-as-success/

Chasing the ideal college experience, I took on debt for a year of fun, only to realize true success impacts all life areas positively.

I didn’t have the grades I should have had in high school. My parents didn’t have anything saved up for college when I turned 18\*. By living at home, staying under my mom’s insurance, and getting support from my dad, I was able to end my junior year of college with a few thousand dollars of student loans and no credit card debt.

But that wasn’t enough. I _needed_ to have a successful college life. I began believing that there was a real risk that I was going to look back at my college life as a failure. People in college are supposed to experience community, friends, fun, and unfettered learning. Living with mommy and having a job was seriously impeding those goals. I had only one year to make things right.

So I did what any idiotic 20-year-old would do: I quit my job waiting tables at an upscale restaurant in Dallas, moved into a dorm, and maxed out my student loans and credit cards to make it happen.
During the next fourteen months I had a lot of fun. I hung out with interesting people, was within walking distance of most of my life, and expanded my mind through books and interesting classes. Based on the terms I had set out for myself, the year was a success.

I’ll be honest with you: I spent many years paying off the tens of thousands of dollars I borrowed in that fourteen-month period. The years of debt repayment that followed brought home an important truth:

> What felt like success for those fourteen months was really failure masquerading as success.

When success is real, it flows to all areas of life, not just the area that has the focus. It also flows into the future, not just the present.

> When financial success turns a healthy, compatible, and loving marriage into a hate-fest, _that’s failure masquerading as success._
>
> When success at work turns colleagues from respect and honor to anger and disdain, _that’s failure masquerading as success._
>
> When success in marriage creates isolated, ignored children, _that’s failure masquerading as success._
>
> When success on my project this quarter leads to years of rework and confusion, _that’s failure masquerading as success._

I believe success does have tradeoffs. There are _failures_ that always _accompany_ success: having dinner with your family might mean someone else who doesn’t need to do that will get promoted instead of you. But I don’t believe that a wise definition of success has collateral damage. I believe a life of peace and balance is possible. Anything else quickly becomes failure masquerading as success.

\* What they did do, though, was tell me over and over again how important it was for me to go to college, something I am thankful for to this day.

---

# Achievable Contentment

URL: https://hedge-ops.com/posts/achievable-contentment/

In a quest for career growth, many chase titles like ‘Software Architect’. Yet, true success isn’t about the next promotion but finding contentment.
“You’re doing great here, and you’re an asset to what we’re doing. We think you have a bright future with us.”

My boss was obviously happy [with my performance](/posts/ten-takeaways-from-the-last-10-years-at-radiantncr) and was telling me about it in no uncertain terms.

“That’s great, and I appreciate it, but when will I get promoted to Software Architect?”

I wanted more than anything at that point in my career to be a [Software Architect](http://money.cnn.com/magazines/moneymag/bestjobs/2010/snapshots/1.html). The title comes with respect, a great salary, and a leadership position within a software development organization. Most people who knew my goal never questioned its wisdom.

…But I now see that it was the wrong goal.

A good goal is one that delivers what it promises: contentment and happiness. The contentment I envisioned after getting the promotion very quickly became discontentment wrapped around another, bigger goal. All of a sudden I wanted to be a Senior Software Architect. And then more. And more. This is insanity.

There is a better way: goals that lead to contentment. Some examples:

| You’ll never be content with having | But you will be content with                             |
| ----------------------------------- | -------------------------------------------------------- |
| The next promotion                  | Doing work that matters and being fairly rewarded for it |
| A big raise                         | Spending less than you make, whatever it is              |
| A new luxury car                    | A car you can afford                                     |
| The spouse of your dreams           | A marriage based on love, acceptance, and peace          |

In the left column, everything seems so deceptively simple. “All I want is a Mercedes.” Well, yes, but what happens when you get the Mercedes? What _then?_ The left side is a path on which you never arrive at a destination. You are always striving, always anxious, always gunning for the next thing.
The left column is a series of steps in life that all follow a commonly accepted pattern of “going for the next thing” while arriving nowhere important. These goals ultimately lead to misery and despair once one inevitably finds this out. What’s strange is that no one questions this path to success, even though there are so many examples of burned-out, depressed, unhappy people who have followed it.

A better definition of success is on the right side. These are achievable goals, not in a few years but right now. The goals aren’t as measurable, and those around you won’t notice them as much as they would a new Mercedes. However, they deliver what they promise: achievable contentment.

Success is achievable contentment through goals that have an end to them. The end of a truly success-oriented goal isn’t _another goal_. It is contentment and peace.

What are your goals? Where will they lead?

---

# Christmas with TeamCity

URL: https://hedge-ops.com/posts/christmas-with-teamcity/

In Christmas 2008, amidst global uncertainty, I utilized my vacation to set up Continuous Integration via TeamCity, transforming our product’s development.

It was Christmas 2008, and the world was going to end. We didn’t know if there would be an economy or even civilization. And I had two weeks of vacation to end the year. I had an 18-month-old who was mostly occupying himself with Christmas toys, and [Annie](http://www.hedge-ops.com/about/annie) was two months away from having my second child. I didn’t really want to take the vacation, but the policy at the time was _use it or lose it_, so I took it.

I could have done anything with those two weeks. I chose to set up [Continuous Integration](http://martinfowler.com/articles/continuousIntegration.html) for one of our largest products through [TeamCity](http://www.jetbrains.com/teamcity/). This was something I was passionate about.
I had [read the literature](http://www.amazon.com/gp/product/B0026772IS/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B0026772IS&linkCode=as2&tag=hedgeopscom-20&linkId=RJ6US3SXFLCWDTR5) on how transformative Continuous Integration had been for organizations. This product was built twice a day by a homemade tool called `bmcon.exe` and some batch files. If the build broke, dozens of people stopped everything to try to get it working, with no clear feedback mechanism for knowing what went wrong, who did it, or whether it was being worked on. It was my moral duty to fix this.

And it so happened that those working on TeamCity were going to take their Christmas holiday… [on January 7](http://en.wikipedia.org/wiki/Christmas_in_Russia). They were Russian. So I took it upon myself to monitor email and get the build working over the Christmas holidays. I remember that on Christmas Day [I was conversing with](http://youtrack.jetbrains.com/issue/TW-6471) [Eugene Petrenko](http://de.linkedin.com/in/jonnyzzz) across the world about how to deal with the complexities of TFS pulling thousands of files and then building them\*.

Years later, almost all of our products were built with TeamCity (though today we have moved to [GitHub Actions](https://github.com/actions)). It was central to our journey toward modern development. And it all started one Christmas years ago when I had a _moral duty_ to do something.

In the book _[Selling with Noble Purpose](http://www.amazon.com/gp/product/B008KPM424/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B008KPM424&linkCode=as2&tag=hedgeopscom-20&linkId=52YQPMBZ7Z4IMUKU)_, [Lisa McLeod](http://www.mcleodandmore.com/what-is-selling-with-noble-purpose/) leads the reader through an exercise of thinking about situations where one makes a difference with customers, in a different way than other people, while loving what they are doing.
When I went through this exercise, I was reminded of this story. Through it, I found my noble purpose:

☞ I share tools and insight for success

This is what truly excites me, and it is why this blog exists. I want to share the tools and insights I’ve found to succeed. I want to help those who have given me tools and insights that have made me more effective by spreading them to others. And I want to properly define success, so I can make sure to follow the path that will lead me there.

In the next few posts, I’ll talk about key elements of _true_ success. Success is one of those things that seems easy to see in others but never seems recognizable in ourselves. I think I’ve found a few reasons why this is.

\* _You can’t see it in the issue I link to above, but Eugene was emailing me and went above and beyond, after his normal hours, to resolve the issue._

---