About Lisa Wallace

Lisa Wallace joined MSI in 2015 as a security focal and project manager, and became Technical Director in 2017. She is involved in internal and external penetration testing, application assessments, digital forensics, threat intelligence, incident response, and eDiscovery efforts. She is responsible for scoping our efforts across all workstreams, as well as project and staff coordination and management. She has worked in a variety of fields, including utilities, financial services, telecommunications, and consulting in a number of ancillary industries.

That phone call you dread…

So, you’re a sysadmin, and you get a call from that friend and co-worker…we all know that our buddies don’t call the helpdesk, right?

This person sheepishly admits that they got an email that, in hindsight, looked maybe a bit suspicious. It had an attachment…and they clicked.

Yikes. Now what?

Well, since you’re an EXCELLENT sysadmin, and you work for the best company ever, you’ve done a few things to make sure you’re ready for this day…

  • The company has had a business impact analysis, so all of the relevant policies and procedures are in place.
  • Your backups are in place, offsite, and you know you can restore them with a modicum of effort – and because you’ve done baselines, you know how long it will take to restore.
  • Your team has been doing incident response tabletops, so all of the IR processes are documented and up-to-date. And you set it up to be a good time, so they were fully engaged in the process.

But now, one of your people has clicked…now what, indeed…

  • Pull. The. Plug. Disconnect that system. If it’s hard wired, yank the cord. If it’s on a wifi network, kick it off – take down the whole wifi network if feasible. The productivity that you’ll lose will be outweighed by the gains if you can stop lateral spread of the infection.
  • Pull any devices – external hard drives, USB sticks, etc.
  • DO NOT power the system off – not yet! If you need to do forensics, the live system memory will be important.

Now you can breathe, but just for a minute. This is the time to act with strategy as well as haste. Establish whether you’ve got a virus or ransomware infection, or if the ill-advised click was an attachment of another nature.

If it’s spam, but not malicious:

  • Check the email information in your email administration portal, and see if it was delivered to other users. Notify them as necessary.
  • Evaluate key features of the email – are there changes you should make to your blocking and filtering? Start that process.
  • Parse and evaluate the email headers for IPs and/or domains that should be blocked. See if there are indicators of other emails with these parameters that were blocked or delivered. (A small header-parsing sketch follows this list.)
  • Add the scenario of this email to your user education program for future educational use.
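
If you want to script that header triage, here’s a minimal Python sketch using the standard library. The filename is a placeholder, and any address it surfaces should be verified before you block anything:

```python
import email
import re
from email import policy

# Pull candidate IPs out of the Received headers of a saved .eml file.
# "suspect.eml" is an assumed path - point it at the message you exported.
IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

with open("suspect.eml", "rb") as handle:
    message = email.message_from_binary_file(handle, policy=policy.default)

for header in message.get_all("Received", []):
    for ip in IP_PATTERN.findall(header):
        print(f"Candidate IP from Received header: {ip}")

print("Return-Path:", message.get("Return-Path"))
print("From:", message.get("From"))
```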

If it’s a real infection, full forensics is beyond the scope of this blog post. But we’ll give a few pointers to get you started.

If it’s a virus, but not ransomware:

  • If the file that was delivered is still accessible, use VirusTotal and other sites to see if it’s known to be malicious. The hash can be checked, as well as the file itself – see the hashing sketch after this list.
  • Consider a full wipe of the affected system, as opposed to a virus removal – unless you’re 100% successful with removal, repeated infection is likely.
  • All drives or devices – network, USB, etc. – that were connected to the system should be suspect. Discard those you can, clean network drives or restore from backup.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems.
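
For the hash check, a short Python sketch like this (the quarantine path is just a placeholder) gives you a SHA-256 you can look up on VirusTotal without uploading the file itself:

```python
import hashlib

# Compute the SHA-256 of a suspect attachment in chunks, so large files
# don't have to fit in memory. The path below is an assumption.
def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file("quarantine/invoice.doc"))
```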

If it’s ransomware:

  • Determine what kind of ransomware you are dealing with.
  • Determine the scope of the infection – ancillary devices, network shares, etc.
  • Check to see if a decrypt tool is available – be aware these are not always successful.
  • Paying the ransom, or not, is a business decision – often the ransom payments are not successful, and the files remain encrypted. Address this in your IR plan, so the company policy is defined ahead of time.
  • Restore files from backup.
  • Strongly consider a full wipe of the system, even if the files are decrypted.
  • Evaluate the end user account – did the attacker have time to elevate privileges? Check for any newly created accounts, as well.
  • Check system and firewall logs for traffic to and from the affected system, as well as any ancillary systems.

In all cases, go back and map the attack vector. How did the suspect attachment get in, and how can you prevent it going forward?

What are your thoughts? I’d love to hear from you – lwallace@microsolved.com, or @TheTokenFemale on Twitter!

You backed it up, right?

Yes, folks…we’re back to basics here. Anyone think we’d still be talking about this in 2018? We are…

Our recent incident response work has brought this to the front of my mind. Think for just a minute about a company that has a business vs. technology conflict. They want their backups to be QUICK! So they put their backups on a NAS. Network attached storage.

Key word there – attached. Now, let’s role play that they have been hit by ransomware. They can restore their backups quickly…and now they’ve lost their backups quickly as well. How catastrophic would this be for you?

There are several things to think about when it comes to your backup strategy. First, what do you need to protect against?

  • Natural disasters. Onsite backups are convenient, but not terribly convenient if your whole building burns down. Are you in an earthquake zone? Tornadoes? Hurricanes? What kind of catastrophic happenings could you experience, and how far away do your backups have to be to be protected?
  • Risk from external attackers. Going back to our ransomware scenario above, what’s the balance between ease of restoring backups vs. protection from harm for your organization?
  • Risk from internal attackers. We all want to trust our sysadmins. What happens if one of them is disgruntled? What safeguards are in place to protect your backups from internal threats?
  • Testing your backups. Periodically perform testing of your backups, both inside and outside of an incident response tabletop. Make sure that your backed-up data really IS backed up, and restores in the manner you’d expect. This is a good time to create some baselines on the restore process, as well – what’s your time to restoration if a crisis happens? (A small timing sketch follows this list.)
  • Hot vs. cold disaster recovery systems. How critical is downtime to your business? If hours means millions, you should have – or seriously consider – a “hot” disaster recovery site to minimize downtime as you pivot over.
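
For that restore baseline, even a rough script helps. The sketch below times a restore and spot-checks the results against the originals; the paths are placeholders and the actual restore step is whatever your backup tooling provides:

```python
import hashlib
import time
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline_restore(originals: Path, restored: Path) -> None:
    start = time.monotonic()
    # ... run your real restore job here (e.g. a subprocess call to your backup tool) ...
    elapsed = time.monotonic() - start
    print(f"Restore wall-clock time: {elapsed:.1f}s")

    # Spot-check: every original file should exist in the restore and match byte-for-byte.
    problems = []
    for original in originals.glob("*"):
        if not original.is_file():
            continue
        copy = restored / original.name
        if not copy.exists() or sha256(original) != sha256(copy):
            problems.append(original.name)
    print("Missing or mismatched files:", problems or "none")

baseline_restore(Path("/data/critical"), Path("/restore/critical"))
```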

Backups are routine, and boring…and when things go well, they should be this way. Prepare yourself for the day things do NOT go well, eh?

What do you think? I’d love to hear what I’ve forgotten – reach out to lwallace@microsolved.com or @TheTokenFemale on Twitter.

Micro Podcast – Office 365

With today’s social engineering threats, every company should be evaluating the configuration and security of their Office 365 presence.
Microsoft has provided many robust features to secure their Office 365 technology. Many of these features are not enabled by default, or they are not enabled with the optimal settings.
For this reason, we created a podcast about potential issues and remediation strategies for Office 365 – enjoy!

Sunshine on a “cloudy” day…

I recently saw an article targeted at non-profits that was a bit frightening. The statement was that small non-profits, and by extension many businesses, could benefit from the ease of deployment of cloud services. The writers presented AWS, Dropbox, DocuSign, et al. as a great way to increase your infrastructure with very little staff.

While the writers were not wrong…they were not entirely correct, either. It’s incredibly easy and can be cost effective to use a cloud based infrastructure. However, when things go wrong, they can go REALLY wrong. In February of 2018, FedEx had a misconfigured S3 bucket that exposed a large amount of customer data. That’s simply the first of many notable breaches that have occurred so far in 2018, and the list grows as you travel back in time. Accenture, Time Warner and Uber are a few of the big names with AWS security issues in 2017.

So, if the big guys who have a staff can’t get it right, what can you do? A few things to consider:

  • What, specifically, are you deploying to the cloud? A static website carries less business risk than an application that contains or transfers client data.
  • What are the risks associated with the cloud deployment? Type of data, does it contain PII, etc.? What is the business impact if this data were to be compromised?
  • Are there any regulatory guidelines for your industry that could affect cloud deployment of data?
  • Have you done your due diligence on cloud security in general? The Cloud Security Alliance has a lot of good resources available for best practices. Adam from MSI wrote a good article on some of the permissions issues recently, as well.
  • What resources do you have or can you leverage to make sure that your deployment is secure? If you don’t have internal resources, consider leveraging an external resource like MSI to assist.

Remember – just because you can, doesn’t always mean you should. But cloud infrastructure can be a great resource if you handle it properly.

Questions, comments? I’d love to hear from you. I can be reached at lwallace@microsolved.com, or on Twitter @TheTokenFemale.

Move over Intel – here comes AMD…

Following close behind Spectre, Meltdown, et al…CTS-Labs announced on Tuesday, March 13th, that its researchers had discovered 13 new critical security vulnerabilities in AMD’s Ryzen and EPYC processors. The Israel-based company presents the vulnerabilities as allowing attackers not only to access data stored on the processors, but also to install malware.

Of some note is the fact that CTS-Labs appears to have given AMD less than 24 hours to respond to the vulnerabilities, rather than the customary 90-day notice for standard vulnerability disclosure. As such, there is no readily available information from AMD.

Another item of note is that the domain name “amdflaws.com” was registered February 22, 2018. Presumably this belongs to CTS-Labs or an associate.

Ryzen chips typically power desktop and laptop computers, while EPYC processors are generally found in servers. A quick rundown of the vulnerabilities as presented as of this writing:

RYZENFALL – four variants, affects the Ryzen family of processors: This vulnerability purports to allow malicious software to take full control of the AMD Secure Processor. The resulting Secure Processor privileges could allow read and write in protected memory areas, such as SMRAM and the Windows Credential Guard isolated memory. This could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Attackers could also theoretically use this vulnerability in conjunction with MasterKey to install persistent malware on the Secure Processor.

FALLOUT – three variants, affects the EPYC family of processors: This vulnerability purports to allow attackers to read from and write to protected memory areas, such as SMRAM and Windows Credential Guard isolated memory (VTL-1).

Attackers could theoretically leverage these vulnerabilities to steal network credentials protected by Windows Credential Guard, as well as to bypass BIOS flashing protections implemented in SMM.

CHIMERA – two variants, affects the Ryzen family of processors: This vulnerability purports to have discovered two sets of manufacturer backdoors: One implemented in firmware, the other in hardware (ASIC). The backdoors allow malicious code to be injected into the AMD Ryzen chipset.

The chipset links the CPU to USB, SATA, and PCI-E devices. Network, WiFi and Bluetooth traffic often flows through the chipset as well. The attack potential for this vector is significant, and malware could evade virtually all endpoint security solutions on the market.

Malware running on the chipset could leverage the latter’s Direct Memory Access (DMA) engine to attack the operating system. This kind of attack has been demonstrated.

MASTERKEY – three variants, affects both the Ryzen and EPYC families of processors: Multiple vulnerabilities in AMD Secure Processor firmware allow attackers to infiltrate the Secure Processor.

This vulnerability purports to allow the deployment of stealthy and persistent malware, resilient against virtually all security solutions on the market. It also appears to allow tampering with AMD’s firmware-based security features such as Secure Encrypted Virtualization (SEV) and Firmware Trusted Platform Module (fTPM).

As in RyzenFall, this could allow attackers to bypass controls such as Windows Credential Guard to compromise credentials, and potentially move laterally through the affected network.

Another consideration is potential physical damage and bricking of hardware. It could also potentially be leveraged by attackers in hardware-based “ransomware” scenarios.

The full whitepaper is here.

Given the continued impact of the Intel patches on performance and stability, and conflicts with other vendor products – hardware and software – hang on, folks. We’re going to see some chaos in this space.

What are your thoughts? Do you feel the responsible disclosure path is to give manufacturers the customary 90 day window, or is immediate disclosure of risk preferable to you?

Let me know what you think. I can be reached at lwallace@microsolved.com, or on Twitter as @TheTokenFemale

Enter the game master….disaster recovery tabletops!

I snagged this line from the most excellent Lesley Carhart the other day, and it’s been resonating ever since.

“You put your important stuff in a fire safe, have fire drills, maintain fire insurance, and install smoke detectors even though your building doesn’t burn down every year.”

When’s the last time you got out your business continuity/disaster recovery plan, dusted it off, and actually READ it? You have one, so you can check that compliance box…but is it a living document?

It should be.

All of the box checking in the world isn’t going to help you if Step #2 of the plan says to notify Fred in Operations…and Fred retired in 2011. Step #3 is to contact Jason in Physical Security to discuss placement of security resources…and Jason has changed his cell phone number three times since your document was written.

I’ve also seen a disaster recovery plan, fairly recently, that discussed the retrieval and handling of some backup…floppy disks. That’s current and up-to-date?

Now, I am an active tabletop gamer. Once a week I get together with like-minded people to roll the dice and play various board games.

For checking the validity of your disaster recovery plan there is an excellent analog to the tabletop gaming world:

Tabletop DR exercises!

Get BACK here….I see you in the third row, trying to sneak out. I’ll admit, I LOVE doing tabletops. Hello? I get to play game master, throw in all kinds of random real life events, and help people in the process – that’s the trifecta of awesome, right there. If it’s a really good day, I get to use dice, as well!

The bare minimum requirements for an effective tabletop:

  • A copy of  your most recent DR/BC plan
  • Your staff – preferably cooperative. Buy ’em a pizza or three, will you? The good kind. Not the cheap ones.
  • An observer. This person’s job is to review your plan in advance, and observe the tabletop exercise while taking notes. They will note WHAT happens, and what actions your team takes during the exercise. This role is silent, but detail oriented.
  • And the game master. The game master will present the scenario to the team. They will interact with the team during the exercise, and will also be the one who generates the random events that may throw the plan off track. It’s always shocking to me how many people would rather be the observer….to me, game master is where the fun is.

Your scenario, and the random event happenings, should fit your business. I tend to collect these for fun….and class them accordingly. A random happening where all credit card processing is doubling due to an error in the point of sale process is perfect for a retail establishment…but an attorney’s office is going to look at me like I have three heads.

Once the exercise is over, the game master and observer should go over all notes, and generate a report. What did the team do well, what fell off track, what updates does the plan need, and what is missing from the plan entirely?

Get the team together again. Buy ’em donuts – again, the good ones. Good coffee. Or lunch. Never underestimate the power of decent food on technical resources.

Try to start on a high note, and end on a high note. Make plans, as you review – what are the action items, and who owns them? When and how will the updates be done? When will you reconvene to review the updates and make sure they’re clear and correct?

Do this, do it regularly, and do NOT punish for the outcome. It’s an exercise in improvement, always…not something that your staff should dread.

Have a great DR exercise story? Have a REALLY great random event for my collection? I’d love to hear it – reach out. I’m on Twitter @TheTokenFemale, or lwallace@microsolved.com

Because you know it’s all about them apps, ’bout them apps…

Know thyself – Socrates

I ran across this link last week, from SANS, and it’s one of the better basic checklists I’ve seen for application security. With all due respect to OWASP, their information is more technical, and useful for practitioners – their testing guide is here. For the CIO level crowd, I’d highly recommend a look at their top 10 for 2017. And a serious nod to Bill Sempf – if you haven’t heard his talk about care and feeding of developers in the security space, go find it!

Since this missive was designed to have pretty pictures and convince you to send your developers to the SANS courses listed, it’s a nice start for security practitioners that may need to work with developers, but aren’t 100% versed in application security. Some of this info is more basic than OWASP’s, as well – which does not diminish its importance. Let’s talk about what they list here, and why it’s important.

Error handling and logging:

Don’t display the specific error messages generated by your programs/architecture, and don’t allow unhandled exceptions – both of these items can display information about the underlying architecture of your application. Attackers can leverage this information and any associated vulnerabilities to compromise the application. If the user creates a condition that generates an error, offer them enough information to fix the problem – nothing more, nothing less.

Don’t allow specific framework errors…”the X program says you broke Y variable” – suppress them. Allowing these errors discloses potentially useful information about the framework and architecture to attackers.
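
One common pattern for both points: keep the detail in your logs, and give the user only a generic message plus a reference ID. A minimal Python sketch – the wrapper name and the message wording are illustrative, not any particular framework’s API:

```python
import logging
import uuid

logger = logging.getLogger("app")

def handle_request(handler, *args, **kwargs):
    """Run a request handler; log full details internally, return a generic message to the user."""
    try:
        return handler(*args, **kwargs)
    except Exception:
        incident_id = uuid.uuid4().hex[:8]
        # Full traceback goes to the log for your team...
        logger.exception("Unhandled error (ref %s)", incident_id)
        # ...the user only sees a reference they can quote to support.
        return f"Something went wrong. Please quote reference {incident_id} to support."
```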

Log all the things! Log authentication attempts – successful or not. Log privilege changes – successful or not. Log all administrative activity, or administrative attempts. Log any and all access and access attempts to sensitive information.

Log all the things….except when you don’t. Don’t log sensitive information. Log the admin attempts, but not admin passwords. Don’t log any information that falls under HIPAA, PCI, or other regulatory spheres.
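
A small Python sketch of that idea – log the event, redact the secrets. The field names are examples; build your own list from the data you actually handle:

```python
import logging

SENSITIVE_KEYS = {"password", "ssn", "card_number"}  # extend for your environment

def redact(fields: dict) -> dict:
    """Mask values we never want to see in a log file."""
    return {key: ("[REDACTED]" if key.lower() in SENSITIVE_KEYS else value)
            for key, value in fields.items()}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("audit")

# Log the authentication attempt, not the secret that came with it.
attempt = {"user": "lwallace", "password": "hunter2", "result": "failure"}
log.info("auth attempt: %s", redact(attempt))
```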

Store logs securely. Plain text in an internet facing share? Not the world’s best idea. Encrypt, secure, and protect against data loss and tampering. If you have a data retention policy, make sure that logs are included and the policy is followed.

Data Protection:

Turn ON HTTPS, turn OFF HTTP. The same URL should not be accessible via HTTP. Get your HTTPS certificates from a respectable CA – no self-signed certificates. Accepting them is bad practice, and you run the risk of the impression that you haven’t done your due diligence, AND of conditioning your users to bypass this simple security measure.

Disable weak ciphers. Don’t wait for the 4,732nd vulnerability, and don’t argue that these vulnerabilities are difficult to exploit. The NEXT one might not be. Get your SSL sanitization house in order.

Don’t allow auto-complete. Yes, some browsers will ignore things – their bad practices shouldn’t be used to justify your bad practices.

Avoid storing user info. Tokenize when possible. If you have to store passwords, encrypt, salt, spindle, mutilate and fold. There’s no such thing as TOO safe here.
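
If you do end up storing passwords, salted and iterated hashing from the standard library is the floor, not the ceiling. A minimal sketch – the iteration count and salt size are illustrative, and a dedicated library such as bcrypt or argon2 is preferable where you can use one:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage - never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
```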

Operations:

Have a consistent, repeatable process for…application development, testing, change control. Include security requirements at the beginning of the design – don’t try to shoehorn them in after the fact.

Review, review, review. Code reviews. Design reviews. Security testing – as you go, not at the end. Harden the environment per best practices.

Train your developers on security! Work as partners, not as the guys who make stuff and those security guys that always say no.

Have an incident response plan. TEST your plan, evaluate your plan, use your plan. Do not wait til something DOES happen to discover the holes in your plan. Keep your plan updated, as staff contacts and responsibility changes. Do disaster recovery exercises.

Authentication:

Hard coded credentials. Don’t. Just don’t. But I need to because….no. You do not. There are safer ways to do this.

Have a strong password policy. Have a strong password reset – do not accidentally disclose things like the validity of an account via the password reset mechanism. Do have a password lockout policy – unlimited attempts is an invitation to a brute force attack.
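
A lockout policy can be as simple as counting recent failures per account. The sketch below keeps state in memory purely for illustration – in practice that state belongs in your directory or database, and the thresholds are assumptions to tune:

```python
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60
_failures = {}  # username -> list of failure timestamps (illustration only)

def record_failure(username: str) -> None:
    _failures.setdefault(username, []).append(time.time())

def is_locked(username: str) -> bool:
    """Locked if there were MAX_ATTEMPTS or more failures inside the window."""
    cutoff = time.time() - LOCKOUT_SECONDS
    recent = [stamp for stamp in _failures.get(username, []) if stamp > cutoff]
    _failures[username] = recent
    return len(recent) >= MAX_ATTEMPTS
```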

Again, make sure your error messages aren’t handing valuable information to attackers.

Run applications and middleware with the least privilege required. Database passwords are gold – do not put them in code. Guard them. But I need to because…again, you do not. Do it right, don’t do it over.

Session management:

Put a logout button on every page. Every. Page. Then, invalidate the session once they’ve logged out – no back button resumption of the session.

Randomize your session tokens, so that they are not vulnerable to predictive attacks. Regenerate them as user permissions change. Unless the application requires multiple connections – and you have a legitimate need to DO this – destroy tokens in multiple sessions. Don’t leave yourself open to session cloning.
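
In Python terms, that boils down to something like this – random tokens from the secrets module, and a brand-new token whenever privileges change. The dict session store is illustration only:

```python
import secrets

sessions = {}  # token -> session data (use a real session store in practice)

def new_session(user: str, role: str) -> str:
    token = secrets.token_urlsafe(32)  # cryptographically random, not predictable
    sessions[token] = {"user": user, "role": role}
    return token

def elevate(old_token: str, new_role: str) -> str:
    """On a privilege change, destroy the old token and issue a fresh one."""
    data = sessions.pop(old_token)
    return new_session(data["user"], new_role)
```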

Cookies. And not the chocolate chip kind. Set the domain and path correctly. Use secure cookie attributes, and expire cookies as appropriate.

Log users out automatically on reasonable idle periods. Implement an absolute logout – there are few, if any, legitimate reasons to be logged in forever.
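
Both timeouts come down to two comparisons per request. A minimal sketch, with limits that are assumptions you should tune to your own risk tolerance:

```python
import time

IDLE_LIMIT = 15 * 60          # log the user out after 15 idle minutes
ABSOLUTE_LIMIT = 8 * 60 * 60  # and never let a session live past 8 hours

def session_expired(created_at: float, last_seen: float, now=None) -> bool:
    """True if either the idle timeout or the absolute lifetime has been exceeded."""
    if now is None:
        now = time.time()
    return (now - last_seen) > IDLE_LIMIT or (now - created_at) > ABSOLUTE_LIMIT
```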

Input & Output handling:

Whitelist over blacklist. Only accept data that meets the criteria for your application.

Validate, validate, validate. Validate uploaded files – consider all uploads as suspect, and sandbox accordingly. Validate input sources.
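
A whitelist-style validator is short enough to show in full – the patterns, extensions, and size limit below are illustrative assumptions, not recommendations:

```python
import re

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")      # accept only what you expect
ALLOWED_UPLOAD_EXTENSIONS = {".pdf", ".png", ".jpg"}  # example whitelist
MAX_UPLOAD_BYTES = 5 * 1024 * 1024

def valid_username(value: str) -> bool:
    return bool(USERNAME_RE.fullmatch(value))

def acceptable_upload(filename: str, size: int) -> bool:
    """Reject anything outside the whitelist of extensions or over the size cap."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in ALLOWED_UPLOAD_EXTENSIONS and 0 < size <= MAX_UPLOAD_BYTES
```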

Follow the OWASP recommendations, many detailed in the link above, for input, output, and safe transport.

Access control:

Apply access controls consistently. Use “gate keeper” technology, so that all requests are validated and verified, whether the user is logged in or not.

Don’t allow unvalidated forwards or redirects. This gives an attacker potential capability to access content without authentication.

Least privilege rules. Make access control mandatory, don’t elevate rights when you don’t absolutely need to. Don’t use direct object references to validate access.

There’s a lot more than I’ve included here…don’t understand these? Need more info? Talk to your developers. Buy ’em a burger. Buy ’em a beer. Become the guy who listens, and attempts to understand…not the jerk that always says no. If you make an honest effort to understand them, and to help them understand you, you’ll both be better for the attempt.

Got a development war story? Got a good development story? Please reach out – @TheTokenFemale on Twitter. Let’s keep the conversation going.

Scope….or, why can’t you just send me a form?

Scoping….the process of gathering data to put together a statement of work for a client.

To be 100% honest, I love scoping. And MSI doesn’t scope via form letter, although I’ve seen a variety of companies take this approach.

Is it because I want to talk to you? Well, partially – I do enjoy the vast majority of our clients. But here’s where I think the “fill in the form” plan fails.

First, when you’re not engaged in conversation, you’re viewing the client requirements with an eye towards putting a peg in the hole of one of your offerings. Even if that ends up to be a square peg in a round hole.

Second, the conversation often takes many twists and turns. As we talk about MSI and our capabilities, it often happens that what a client asks for isn’t precisely what they need. We can offer a different service, and help them get to their end goal in a different way. And this isn’t always more services…it’s equally likely to be less, or a custom variation on a service we already have. The majority of clients don’t fall into “canned” services…and it’s refreshing for them to talk with us when the other vendors they’re engaging are simply dropping them into a slot.

So the first question of any scoping conversation is – what is the purpose? What problem are you trying to solve? Is it regulatory – you have to have X assessment? Is something broken? Or are you trying to become aware of some security gaps – whatever they may be.

That’s the springboard of the conversation, helps us get to know you, and helps you to get the right mix of services. It’s personalized, customized, and based on individual attention from our sales and technical staff.

The next piece of serious hands-on attention comes when we’ve gathered the details for the engagement. Does the information provided make sense? If you’re a financial services firm, and you’ve chosen to be measured against HIPAA, is that really the right choice for you? The push-button approach may miss that.

Another item that’s fairly common is typos or inaccurate information in the network space provided. So we’ll do passive recon on the information provided. Does the IP space really belong to your company? Are you using hosting via AWS, which requires an additional penetration test form? Are you using a host like Rackspace that has additional contract stipulations on penetration testing?
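
One of those passive checks is easy to illustrate: do the hostnames actually resolve into the ranges the client provided? A toy Python sketch, with placeholder hosts and ranges rather than real scoping data:

```python
import ipaddress
import socket

# Placeholders - substitute the ranges and hostnames from the scoping documents.
claimed_ranges = [ipaddress.ip_network("203.0.113.0/24")]
hosts = ["www.example.com", "mail.example.com"]

for host in hosts:
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        print(f"{host}: does not resolve - typo in the scoping data?")
        continue
    in_scope = any(addr in net for net in claimed_ranges)
    print(f"{host} -> {addr} ({'in' if in_scope else 'NOT in'} the provided ranges)")
```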

Throughout the engagement, there are more personal touches. Via our project management portal, the engineers working on your engagement touch base every day or every other day as work progresses. If a highly critical issue is discovered, all work stops, and the engineers will get on the phone with you. We don’t believe in a situation where a critical vulnerability is only shared in a report, weeks after the discovery.

Now the reports are in your hands. We keep those reports for ~90 days – after that, all reports are purged from the system. During that 90 days, we can supply replacement copies – we can also supply the password used for encryption, if you’ve misplaced it. Sanitized copies of reports can be produced as well, for dissemination to vendors, clients, regulatory bodies, or any interested parties that you need to share this information with – a small fee may apply.

At the end of the day, the question is – who did you help today? It’s rare for MSI to end the day without being able to answer that question in multiple ways. It’s one of my favorite things, and we’d love to help you!

Spectre and Meltdown and Tigers, Oh my….well, maybe not tigers….

On January 3rd, three new vulnerabilities were disclosed. These vulnerabilities take advantage of how various CPUs handle processing in order to return a faster result.

The technical details for Spectre and Meltdown are addressed by the papers linked to their names above, along with some POCs from the Project Zero team.

A few observations on how the industry is addressing this issue…and a few points of interest that I’ve found along the way. First, let’s note that the CVEs for these are from 2017…when in 2017? We don’t know. But the catchy domain names were registered around the third week of December 2017.

The full vendor matrix at CERT – this is always worth watching, and there are some useful tips for cloud implementations via Amazon and Microsoft Azure:

Operating system manufacturers:

Apple

  • Will release updates for Safari and iOS in coming days. Some speculation that macOS 10.13.2 or higher on Macs has some protection from one or more variants – not verified
  • https://support.apple.com/en-us/HT208394

Windows

Linux

Some antivirus solutions are causing blue screens after application of these patches:

This is particularly interesting to me – the browsers. I did not expect to see the browser patch bandwagon to be as rapid as it has been:

Firefox

Internet Explorer

Safari

  • Will be addressed in approximately the same timeframe as Apple iOS patches – current ETA unknown

Chrome

The long and short. Is the sky falling? Probably not. If you have solutions that are hosted with a cloud provider, check in with them. What are their recommended mitigations, and have you implemented them? In an enterprise environment, do your due diligence on patches. Patch in your test environment first, and research your antivirus solution for potential impact.

And I believe I’m paraphrasing the excellent Graham Cluley. Calm down, make a cup of tea – although mine is salted caramel coffee. Patch during your normal cadence for critical patches, and keep the ship afloat!

Office 365 – all your stuff belongs to…who?

We’ve had a surprising number of incident response engagements involving Office 365 lately, and I’d like to discuss some best practices to keep you from an incident. There are also some actions that should be taken to allow effective investigation if you should suspect a user or resource is compromised.

The single most important thing that would have kept most of these incidents from occurring? Enable multi-factor authentication. Period.

Yes, I know. But our users complain! It’s a hassle! It’s an extra step!

Let’s consider carefully. Look at each user in the organization. Consider what they have access to, if their credentials are compromised. Look at these resources in your organization:

  • Exchange
  • All Office 365 documents
  • SharePoint

Weigh the user inconvenience against the loss of any and all data that they have access to…and discuss with your risk and compliance staff.

Still with me? Here’s how to enable multifactor authentication in Office 365.

If you are on a hosted Office 365 infrastructure, your service provider should be ready and willing to help with this if you do not have access in place to enable the option.

Next, let’s talk about investigating an actual compromise. Microsoft has some fairly robust mailbox audit capability for user access, etc. And…it’s not turned on by default.

Crazy, you say? Just a bit!

First, you need to turn the options on – instructions to do that are here.

Then you need to enable it for mailboxes. Instructions to do that are here.

Please note that this second step requires Powershell access – so if you are in a managed Office 365 environment, your service provider will likely need to assist. (and don’t take no for an answer!)

There are a number of other options that are useful for fine-tuning the spam and malware settings, enabling DLP, and other useful things that are not on by default – or not configured for the most optimal settings.

Would you like an audit of your Office 365 environment? Our engineers can help you fine tune your settings to optimize the available options. Reach out, we’d love to help.

Questions, comments? Twitter – @TheTokenFemale, or lwallace@microsolved.com. I’d love to hear from you!