Tuesday, October 22, 2013
Labels: firewalls, malware
Outbound Firewalls: Prevention is the Cure
Are "Outbound Firewalls" worth it? (Short answer, yes)
I was reading an interesting article over on LifeHacker attempting to make the case that outbound firewalls are worthless. For those that don't know what an outbound firewall does, its purpose is to block programs from making internet-bound connections without your approval. The Windows firewall presents something similar to the image below when this occurs:
Image credit: zolierdos
The main point of such an operation, if you think about it from the perspective of trying to control malware, is to block the malware from making a connection to the Internet without your knowledge. This assumes that you've already visited a compromised link, resulting in the installation of malware on your machine. Blocking malware at that stage is reactive, not proactive. See below for an illustration of this process:
Instead of waiting until that point, why not prevent it in the first place?
That's where prevention works in your favor. By preventing your browser from sending a request for that compromised or malicious link in the first place, you reduce the likelihood of infection.
Does that prevent you from getting infected by borrowing someone's malware-infested USB drive? Of course not. But if you do have this prevention mechanism and get infected through another vector such as a USB drive, then unless the prevention method itself is circumvented, the malware will still get blocked when it phones home to its command and control servers. At that stage the prevention mechanism operates much like a firewall blocking outbound Internet connections from untrusted programs.
Sold! Now what do I do?
There are many programs that perform this prevention function. Most "Internet Security Suites" from vendors such as McAfee, Kaspersky, and Trend Micro have web browser plug-ins that analyze each URL (website link) individually and block known bad URLs. In my opinion, though, to know that a link is bad requires one of two analyses:
1) Someone visited the link and became infected, at which point the vendor's program detected the infection and reported the URL back to the vendor mothership for distribution, so all other endpoints are now protected.
2) A brand new malware link is established that no vendor has seen before. A vendor program installed for protection analyzes the content at this brand new link and dynamically assigns a threat rating, blocking access.
In this way, link analysis becomes very similar to virus detection. Look for the known bad (signatures), and identify the potentially bad (heuristics).
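To make the distinction concrete, here's a toy sketch of the two styles in Python; the blocklist entry and the heuristic patterns are invented for illustration and bear no relation to any vendor's actual data:

```python
import re

# "Signature" style: exact matches against a feed of known-bad URLs.
# This entry is a made-up example, not a real feed.
KNOWN_BAD = {"http://bad.example.com/payload.exe"}

# "Heuristic" style: patterns that often (not always) indicate a risky link.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\.exe$", re.IGNORECASE),            # direct executable download
    re.compile(r"^https?://\d{1,3}(\.\d{1,3}){3}"),  # raw IP instead of hostname
]

def check_url(url):
    """Return 'block' (signature hit), 'suspicious' (heuristic hit), or 'allow'."""
    if url in KNOWN_BAD:
        return "block"
    if any(p.search(url) for p in SUSPICIOUS_PATTERNS):
        return "suspicious"
    return "allow"
```

Real products layer far more intelligence on top, but the flow is the same: check the known-bad list first, then fall back to heuristics.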
My favorite prevention software in this area is Blue Coat's K9. It's essentially a proxy agent that sits on your machine monitoring all outbound requests for content, and it will block access to a site using both the "signature" and "heuristic" styles of link analysis. This isn't to say that other vendors' programs don't have that capability; I've just had more familiarity with the K9 software.
If anyone would like me to post a complete review of K9, please let me know.
To be clear, I'm not advocating such URL analysis tools over outbound firewalls. I think they should be considered another layer of defense. Let them catch the majority of "bad" requests to prevent what they can, and still keep the outbound firewall active in case an unknown program is detected making an outbound connection attempt. Also note that these days many malware authors attempt to hide the outbound connection attempts through other means so these may not get detected by the outbound firewall, but it's still worth keeping around.
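For Windows users who'd rather script an outbound block than wait for the prompt, the built-in firewall exposes this through netsh. This is just a sketch run from an elevated command prompt, and the rule name and program path are placeholders:

```bat
:: Block all outbound traffic from one program (rule name and path are placeholders).
netsh advfirewall firewall add rule name="Block UntrustedApp outbound" dir=out action=block program="C:\Path\To\untrusted.exe"

:: Later, remove the rule by name.
netsh advfirewall firewall delete rule name="Block UntrustedApp outbound"
```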
Friday, August 23, 2013
Labels: Dell, noise reduction, PowerEdge 2950
Dell PowerEdge 2950: Silence the Noise, Kevin's Chassis Fan Fix
This post relates the tale of our brother-in-arms Kevin, who valiantly pursued the quest to silence the PowerEdge 2950 with dogged determination. The focus here is on the chassis fans; I'll post another entry regarding his efforts to swap out the power supply fans and hack the BMC. And now, to the fixes.
Step 1: Update the BMC firmware
After updating the Baseboard Management Controller (BMC) firmware, the fan speed slopes changed: the fans now have a lower minimum speed and are noticeably quieter. The firmware used was the Dell PowerEdge 2950 BMC firmware update v2.50, A00.
Step 2: Replacement fans
I selected the Top Motor FA60TM3BMP as a replacement, since the FA60TM3BEP isn't carried anymore by BestByte.com. A representative at BestByte named Barry was very helpful in referencing the spec sheet for the Top Motor fans they did carry.
The Top Motors seem to be just a re-brand of Dynatron parts. According to the spec sheet, the Top Motor FA60TM3BMP corresponds to the Dynatron DF126025_M. With 7000 RPM, 38.28 CFM, 48.1 dBA, and 13.55 mm/H2O of static pressure, these fans should give an acceptable margin for error.
Now for the frustrating parts. All the replacement fans needed their connectors swapped with those of the stock fans, as well as having their closed mounting holes opened up. A simple re-pin didn't work because the pins aren't compatible; specifically, the Top Motor pins were too large for the stock fan connector.
The solution:
- Simply cut the wires.
- Swap the connector leads, solder the wires back together, and apply some heat shrink tubing. Voila! Top Motor fans with proper connectors.
- Use a hacksaw, long-handled pliers, and a Dremel cutting wheel to open up the closed mounting holes.
- Modified fans in cages:
- Modified fans installed:
Also of note: the power supplies had the same high-speed fans as the stock CPU coolers. Solder-swapping those connectors as well, using two more Top Motors, resulted in the server failing to get past the first POST screen because the power supply fans ran at too low an RPM. For the time being, the stock PSU fans were reinstalled, but the planned solution is to hack the BMC.
Results:
PowerEdge 2950 at idle with modified fans
PowerEdge 2950 at full load with modified fans
Some users on the Hacking guide thread have commented that the BMC hacking guide works on the 2900 as well as the 2800, and by extension the 2950.
One more thing, unrelated, but someone may find it useful: I was able to replace a 2950 Gen I motherboard with a 2950 Gen II motherboard and everything worked just fine, with no weird boot-up issues or anything. The Gen II motherboard gives me quad-core CPU capability, and I was able to install those nice quad cores without issue.
Friday, August 16, 2013
Labels: Altamira, CTF
Altamira CTF 2013: Lessons Learned
Last weekend, I participated in the first annual Altamira CTF competition, and I'm posting a review of my experience for those that would like to know what it was like and for those that may be interested in attending next year. The general overview of the game can be found on Altamira's Game Information Page, but the synopsis can be parsed down to three core objectives:
- Defend a nuclear reactor from attack while maintaining services and generating energy
- Actively exploit and attack opponents to obtain flags
- Scan the network, identify hidden assets, and obtain flags
Before the game began, we were provided with the rules of the game along with our internal/external IP space. Each team had a similar set up.
Tools
All team members used Kali 1.0.4 as their operating system of choice for game play. Tools within Kali used in the game included Metasploit 4.7.0 with Postgres 9.1, Armitage, Nessus 5.0.3, and ufw/gufw.
A special note: if you're using Metasploit 4.7.0 as your framework, you must use Nessus 5.0.3 if you want to load the Nessus plug-in. Visiting Tenable's download page only leads you to links for 5.2.x. Here are a couple of shortcuts to 5.0.3 to save you the time of analyzing their site code to find the package names:
Sharing the Database
Prior to the competition, the team agreed that to eliminate duplication of effort, we needed to use a shared database for offensive purposes. At the same time, the shared database needed to be protected. We set up the Postgres database, Nessus, and an Armitage team server instance to all be accessible remotely. To protect the centralized infrastructure, we leveraged ufw/gufw, which works great since it starts out with a default deny for incoming traffic. When the competition started, team members quickly fed me their IP addresses, and I loaded the rules into gufw. I'll be posting a separate guide on how to share out the database, which is simple once you've done it a couple of times.
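To give a flavor of the rules involved, here's roughly what the ufw side looks like; the teammate IP is a placeholder, 5432 is the Postgres default, and 55553 is the default Armitage/msfrpcd team server port, so adjust for your own setup:

```shell
# ufw denies incoming traffic by default, so only explicit allows get through.
# 192.0.2.10 is a placeholder teammate IP; repeat per teammate.
sudo ufw allow from 192.0.2.10 to any port 5432 proto tcp    # shared Postgres
sudo ufw allow from 192.0.2.10 to any port 55553 proto tcp   # Armitage team server
sudo ufw status numbered                                     # verify the rules
```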
Defense
Once the game began, offense provided defense with the results of scanning the team Scram server, identifying open ports 80 and 22. Accessing the team server via HTTP on port 80 showed us the web page where the actual Scram game was running:
Check out the briefing video above for the full details, including what our interface looked like.
After logging in to the server via SSH, we changed the root password. With the password changed, the defenders set about learning how to play the Scram game, since energy generated (megawatt-hours, MWh) was a scored component.
Discoveries
We learned about halfway through the game that another team had analyzed their team server's game code and found a way to accelerate their point gain. We did not dedicate enough resources to emulating this approach until the last hour of the game (fail!).
The Scram game itself was built in Python and had multiple components. An environment.py script contained the variables and formulas from which the score was calculated. In another directory, a network.py script took the results and sent them to the scorebot. The packet was ultimately crafted using Scapy in yet another Python script. The service wrapping these Python scripts was Twisted, which utilized WebSocket initialization scripts also written in Python. A danger in altering the game code was that, depending on the alteration, teams would need to restart the service components in the right order; failure to do so would result in a loss of point accumulation. Note to self: learn things...
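I no longer have the real game code, but the scoring structure we saw was roughly along these lines; every name and constant below is invented purely to illustrate how variables and formulas in an environment.py-style module could drive the score:

```python
# Hypothetical sketch of an environment.py-style scoring module.
# Every name and constant here is invented for illustration.

REACTOR_EFFICIENCY = 0.85   # fraction of nominal output actually delivered
BASE_OUTPUT_MW = 100.0      # nominal reactor output in megawatts

def megawatt_hours(hours, efficiency=REACTOR_EFFICIENCY):
    """Energy generated over a period; MWh was the scored component."""
    return BASE_OUTPUT_MW * efficiency * hours

def score(hours, services_up, total_services):
    """Scale the energy score by the fraction of services kept alive."""
    return megawatt_hours(hours) * (services_up / total_services)
```

Tweaking a constant like REACTOR_EFFICIENCY in such a module is exactly the kind of change the other team exploited, and exactly the kind that requires restarting the dependent services in the right order afterward.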
Offense
As soon as the team was wired in, offense scanned both the internal and external networks to identify hosts. The scan was run using Armitage, and because we didn't know of any hosts, we included the whole ranges. The results populated both "live" hosts and "ghost" hosts; scanning a "ghost" host turned up no open ports. I haven't read Armitage's documentation, but it may automatically place an entry for each host in the scanned range through its scanning script. To clean up our visual display, we switched to table view and removed the "ghost" hosts. Note to self: learn to avoid this in the future.
With the remaining live targets, we ran multiple port scans and Nessus scans against them. We were able to get shells on two or three Linux machines. All of the Windows machines were running Windows 7; more on that below.
Discoveries
From one of the shells, we had only limited command-line access: commands like "ls" worked, but "cd" and pretty much anything else did not. We did not attempt to load another shell (fail!).
Several of the Windows machines were Windows 7 Service Pack 1, build 7601. The default install of Metasploit 4.7.0 does not have any exploits for that build, and MS09-050 did not work. Enumerating these machines using the endpoint mapper auxiliary module revealed that six or seven machines were part of one domain (PONY_GROUP), while three or four were part of another (NOVAC). Had we gained access to a single Windows 7 machine in either domain, we could've used pass the hash to get access to the others. Shoulda, coulda, woulda...
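For reference, a pass-the-hash attempt with Metasploit's psexec module looks roughly like the following msfconsole session; the host, user, domain, and hash values are all placeholders:

```text
# msfconsole session sketch; host, user, domain, and hash are placeholders
use exploit/windows/smb/psexec
set RHOST 192.0.2.50
set SMBUser Administrator
set SMBDomain PONY_GROUP
set SMBPass aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0
exploit
```

The trick is that SMBPass accepts an LM:NTLM hash pair instead of a plaintext password, which is what makes the lateral movement possible without ever cracking anything.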
Lessons Learned
- CTFs in this format are full of unknowns: prepare as best you can and expect something new and different
- In CTFs where hacking the game mechanism itself is allowed, resources should be devoted to analyzing the game code at the start
- Where CTFs provide information about the infrastructure, as with the game Scram in this case, study any and all related resources prior to the start of the competition
- Have a tool such as WinSCP, or know how to secure copy in Linux, to copy files and folders over an SSH session
- Identify all known Metasploit modules/exploits built by others for use against Windows 7, and test their usage beforehand
Wednesday, August 14, 2013
Labels: Cyber (InfoSec) Competitions, cyber challenges, USCC
U.S. Cyber Challenge 2013
In June I attended the U.S. Cyber Challenge, and for those that don't know about it, I'm posting a review because this is one event that deserves more attention.
To sum up what it is, here's an excerpt from the main website:
USCC Summer Camps feature one week of specialized cyber security training that includes workshops, a job fair, and a culminating “Capture the Flag” competition. The workshops are lead by college faculty, top SANS Institute instructors, and cyber security experts from the community. The workshops and presentations focus on a variety of topics ranging from intrusion detection, penetration testing and forensics. Participants can also participate in a job fair that provides them the opportunity to meet with USCC sponsors and discuss potential employment. The week-long program ends with a competitive “Capture the Flag” competition and an awards ceremony attended by notables in the cyber security industry and government.
Qualifying
Image credit: vmpyrdavid
In order to attend the camp, you have to compete in the initial challenge. This year that involved packet capture analysis, which tested not only your ability to filter through the pcap for relevant information, but also your ability to identify what type of web-based attack was happening (XSS, SQLi, etc.). Based on how well you perform, you are sent an invitation email. I think the invitation is really just a filter for the individuals who are genuinely motivated about attending the event, because to complete the invitation you have to seek out two letters of recommendation (LoR). One of those letters can be a personal one from a friend, but the other has to come from either a teacher or your boss. People not willing to go to those lengths are not given further consideration.
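As a toy illustration of what that qualifier exercise boils down to, spotting attack strings in captured HTTP requests, here's a simplistic Python classifier; the patterns are deliberately naive and the example requests invented:

```python
import re

# Deliberately naive indicators of the web attacks the qualifier covered.
ATTACK_PATTERNS = {
    "SQLi": re.compile(r"('|%27)(\s|%20)*(or|union|--)", re.IGNORECASE),
    "XSS": re.compile(r"<script|%3Cscript", re.IGNORECASE),
}

def classify_request(http_request):
    """Return the first attack type whose pattern appears in the request, else None."""
    for attack, pattern in ATTACK_PATTERNS.items():
        if pattern.search(http_request):
            return attack
    return None
```

The real qualifier required digging the requests out of a pcap first (Wireshark's HTTP filters handle that part), but recognizing the attack strings themselves comes down to pattern knowledge like this.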
After submitting the LoR, I received an email confirming my selection along with instructions for attendance!
Event Venue
Apparently, the event venue changes from year to year and in 2013 it was held at the Hotel Roanoke in Virginia. USCC attendees were slated two to a room. Make sure you check out the event logistics for future years so you don't mistakenly tell someone they can come room with you and then have to nix those plans later.
The hotel had all the required amenities including a gym, pool, and cafe/bar. Hotel wireless is bound to the credentials of the person checking in, and there's a max of 5 devices per login.
Attached to the hotel was a conference area where the USCC conference would be held. The wireless signal reached in all areas, but perhaps due to the number of attendees, bandwidth was slow. Don't plan to rely on your carrier's hotspot because multiple attendees from multiple carriers had spotty service in the conference area. Your mileage may vary.
The Classes
The cyber camp is really a four-day series of sessions, with a different topic each day, capped off with a capture-the-flag competition at the end of the week. These were the daily sessions:
- Day 1 - Scapy
- Day 2 - Android Pen Testing
- Day 3 - Memory Forensics with Redline and Volatility
- Day 4 - Tactical Incident Handling & Hacking Techniques
Each session was built from excerpts of the SANS classes covering these topics, taught by the instructors who teach those classes. The quality of instruction was extremely high, and for that alone the USCC is worth attending.
At the end of day 2 there was an ethics discussion panel on topics such as expectation of privacy. Each table was grouped into a team to discuss a couple of hypothetical scenarios, with the results shared in an open dialog. It was a very interesting conversation.
At the end of day 4 there was a job fair. Already gainfully employed, I did not attend.
Capture the Flag
Day 5 culminated in a CTF, with attendees grouped randomly into teams of four or five. The infrastructure for the CTF was provided by iSight through their Threatspace CTF platform. The challenges, from what I can remember now, were largely web-oriented and forensics-oriented. I used OWASP ZAP and HTTP header manipulation tools to identify some flags in the web-based challenges. For the forensics-focused challenges we used Wireshark, John the Ripper, Cain & Abel, and aircrack.
Although my team did not win or place, it was fun learning new tools and their extended usage.
Overall, this experience was fantastic, and I highly recommend everyone give themselves the opportunity to experience it. Keep checking the USCC Cyber Quest site for future challenges!
Thursday, May 30, 2013
Labels: Android, OUYA
I got my Kickstarter backer version of the OUYA last week. I took it with me on a business trip to test it out in the hotel and I decided to post my review for those who are interested.
OUYA First Impressions
Form Factor
The OUYA console itself is tiny, which is what excited me most. I've long dreamed of bringing a gaming console with me on business trips without the hassle of worrying about packing it in checked baggage or the burden of lugging it around in my carry-on. I can easily toss the OUYA into my carry-on, so the form factor is a big plus for mobility. The only downside I've noticed to doing that is that the controller bumps up against objects in my bag, which activates Bluetooth connection attempts. I can see that easily draining the controller batteries so make sure to remove one of them prior to traveling if you bring it with you.
The console has a good weight to it, thanks to the weight placed in the bottom. Otherwise it might feel too light. If you haven't seen the inside of the OUYA yet, check it out. The controller also feels really good to handle. The metal finish components on the top of the controller slide off and each side stores a battery. The top middle flat area is a touch pad for mouse control, which works like a champ.
User Interface
When you first power on the OUYA, you have to register for a new account or login to your existing account. After you login, there are four menu options: Play, Discover, Make, and Manage. The Play area takes you to the games you have installed and the games you've downloaded waiting to be installed. Discover brings you to the app market with different filters like "Staff Picks" you can navigate through, or you can search by name. Make is the developers section, but it's also where you can access installed applications. If you sideload an app, it'll be available in Make > Software. I'll discuss the process to sideload an app later. Lastly, through Manage, you can connect to wireless networks, check for system updates, and access the advanced menu which is the stock Settings menu in Android 4.1. Through this menu you can access the Storage area for any installed apps or downloaded apps.
As you'll see if you research the OUYA at all, anyone who's experimented with it complains first about the sparse app landscape. The OUYA homepage has a counter in the upper right to inform visitors of the current number of games/apps available, which at the time of this post is at 128. Thankfully, OUYA was developed to be an open platform, so you can sideload apps/games.
Usability
Overall, everything works as expected. I have noticed a slight delay when using the directional pad to navigate through selecting letters when filling in information or navigating through menus. I don't know if it's the controller Bluetooth latency, the directional pad connectivity to the controller board, or the console controller board latency but I find myself having to press the directional pad more than once to accomplish the task at hand. Aside from that, everything else flows smoothly.
Side Loading Apps
IT World has a good write-up on side loading apps, so I don't really have anything new to add there. To sum up, the best way to accomplish this is to throw the .apk file you want to load into a cloud-based file share like Dropbox. Then take the download URL and pop it into a URL shortener service. Once you get the smaller URL, you can open the OUYA browser to access the apk file. Download it, install it, and then open it via Make > Software.
I've found success side loading a Netflix 1.8.1 apk file from the xda forums. I tried 2.1.2 from androiddrawer.com, but it wouldn't install. Interestingly, the 1.8.1 apk shows an OUYA icon when the app is accessed via the Storage manager, but the 2.1.2 apk does not. I may look at them forensically later to see what the difference is.
Summary
I'm impressed with the form factor and weight, both inside and out. The user interface is easy to navigate, which is a definite plus. The biggest draw of the OUYA is its promise of an open gaming system; the biggest downside is that the app landscape is sparse right now. Hopefully it will continue to receive enough media attention to keep that awareness in the forefront of developers' minds. For me personally, it's perfect for travel, and I can watch Netflix on it while being on my laptop at the same time. I'm eager to see what the future holds for this device!
Thursday, May 16, 2013
Labels: Windows 8
This is a slight modification of the instructions over at Lifehacker.com, but I just tested this in Windows 8 and it works!
Windows 8: Force Windows to Use Your Wired Connection Instead of Wi-Fi
Image credit: Lifehacker.com
- Open the Control Panel (Win Key + X > Select Control Panel)
- Click on Network and Internet and then on Network and Sharing Center
- On the left hand side, click on Change adapter settings
- Press Alt to open the Folder Menu, and click on Advanced and then Advanced Settings
- Now you'll see the Advanced Settings dialog depicted above!
Make sure Ethernet is above Wi-Fi, click OK, and you're done!
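If you'd rather do this from the command line, you can approximate the same effect by giving the wired adapter a lower (better) interface metric via netsh from an elevated prompt; note the interface names below are examples, so use whatever your machine calls them:

```bat
:: Lower metric = higher priority; interface names are examples.
netsh interface ipv4 set interface "Ethernet" metric=10
netsh interface ipv4 set interface "Wi-Fi" metric=50
```

This adjusts the routing metric rather than the adapter binding order, but it should achieve the same goal of preferring the wired path when both connections are up.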
Thanks to Melanie Pinola at Lifehacker.com for this tip. Be sure to check out her article for reader comments that may provide additional tips.
Wednesday, May 15, 2013
Labels: spam
This one missed the spam filters. I'm posting this to benefit others because a family member asked if this was legit. Before we post the analysis, check out the email with formatting included:
Fake Email: Career Boost - Rent One Square Meter of Your Garage
---------- Forwarded message ----------
From: <manthonygarland@northstaffing.com>
Date:
Subject: New Career Boost – Apply Today
To: <spam recipient>
Cc: HAVE A GOOD DAY <spam recipient> !
Garland offers logistics services which are created in conjunction with clients to meet their needs.
Garland suggests solutions in how we provide our consumers\buyers logistics at either our premises or at those of our buyers.
We can diminish expenses for Warehousing | Personnel | Distribution by:
altering costs from fixed to variable.
contracting out gives motivation for quality.
makes our buyers to concentrate on core activities and sales amount.
scale effect.
lessening of administration prices.
Garland has invested heavily to render a upscale service.
It has created its own solution which allow customers receive more benefit throughout the logistics process.
GARLAND, Would like to offer You an opportunity to RENT only ONE square meter of Your garage
or home space to accomplish following logistics activities for our company:
Arrival of luggage
Stock entry
Stock control
Distribution If You are ready for transporting max 160 lbs - Please contact us as soon as possible
Tracking and tracing
Our company can offer You 145dollars payment for each week of 1 square meter rent.
If You are able to carry more space, please reply back immediately.
We have profitable premiums for beneficial employees.
In our stocks Garland can make the following operations:
Pick and Pack – Receiving, Checking, Separation, Labeling, Packing and Postage.
Warehousing – Checking, Packing and Storage.
Preparation of Orders – Labeling, Packing and Distribution.
Stock Control - Email notifications.
Statistics - Statistical information if demanded.
Garland, a Portuguese independent company has serious experience in the Logistics of Fashion, Literature, HiTech, Drilling Equipment & Tires.
Why not hear more about our entire assortment of services and contact us?
Reply back with Your resume immediately and we will rent even Your roof space!
Have a blessed day!
Analysis
Disclaimer: I am not responsible for any consequences if readers decide to visit the IP addresses or websites mentioned in this report.
Interestingly, there's just enough ambiguous language present to make it past the spam filters. Also, there are no links to any websites encouraging a click-through, so this isn't necessarily a phishing email. The errors in spelling and word usage are underlined above, which, aside from the formatting, are clear indicators that this is not a professional email.
Now let's check out the only potential source indicator: the email address. According to domaintools.com, the northstaffing.com website was re-registered May 11, 2013, but the specific details are "privacy protected". The nameservers point us to ns1.hostven11.ru and ns2.hostven11.ru. Russian nameservers, huh? That makes me feel safe and confident that the email is trustworthy. After all, I'm sure most Portuguese companies have their websites handled through Russian registrars.
Next, I decided to do an nslookup on the domain, which gives an IP address of 84.2.39.106. According to Blue Coat's Site Review, the IP is categorized as "Government/Legal". McAfee's Threat Intelligence reports the IP as "Unverified" but geo-locates it in Hungary, which correlates with the .ru nameservers. If you send an HTTP request to the IP address by itself, it's obvious it's a shared hosting server.
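That first triage step is easy to script. This sketch just pulls the sender's domain out of a raw message with Python's standard email library so you can feed it to nslookup or whois yourself; the message text is a stand-in for the real email:

```python
from email import message_from_string
from email.utils import parseaddr

# Stand-in for the raw spam message; only the From: header matters here.
RAW = """\
From: <manthonygarland@northstaffing.com>
Subject: New Career Boost - Apply Today

(body omitted)
"""

def sender_domain(raw_message):
    """Extract the domain from the From: header of a raw email."""
    msg = message_from_string(raw_message)
    _, addr = parseaddr(msg.get("From", ""))
    return addr.rpartition("@")[2]

# Print the lookup command to run by hand; no network access needed here.
print("nslookup " + sender_domain(RAW))
```

Keep in mind the From: header is trivially forged, so treat the domain as a lead for registration and DNS research, not as proof of origin.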
And finally, visiting northstaffing.com shows an empty directory as the index page.
Conclusion
This email is obviously fake, but interestingly it's not what I expected, which was a malware delivery attempt. Instead, it appears to offer the promise of semi-legitimacy. The relative did not pursue things further, but I wonder what would've happened. A colleague told me there was a news story a while back where someone rented out their garage in such a manner and ended up getting arrested for drug trafficking.
The moral of the story is never believe an email from someone you don't know!
Tuesday, May 14, 2013
Labels: Google, YouTube
Ah, Google. In perhaps another moonshot, Google has taken what I believe to be the first steps on the path to challenging cable companies and the channel packages we're all forced into buying, even though nobody watches G4 now that it's all grown up. This move has two key implications: (1) it challenges cable companies to provide channel-by-channel subscriptions, and (2) it pushes YouTube video producers to increase the quality of their content, move that content to the paid channel listings, and try to get their viewers to follow.
On May 9th, YouTube announced a pilot program for paid subscription channels. Excerpted from their announcement:
Suffice to say that for $1, there will be people that try the channel and don't like it, but to keep people's interest the content will have to be great. Which is good, because this model will force all potential YouTube producers to step it up a notch if they want to earn revenue amounts past the total that ads can provide. That begs the question, will we begin to see channels that used to be free start switching over to the $1 a month? I doubt it. I think as with Netflix trying to spin the DVDs off as a separate service, the established customer base will be resistant to change.
I think the best way existing channels can take advantage of the new distribution method is to create a separate YouTube channel, and start pointing the freeloaders (me included) over to the paid version. Expect the call to action at the end of YouTube videos to start including the saying, "check out my/our other channel," which when clicked will bring you to the paid channel line up.
This improved content reinforces the You in YouTube because it's content created for and demanded by us. Although there are numerous high quality shows on television like my current fav Hannibal, we're forced to pay for that channel along with however many others we don't watch.
Speaking of not watching, a current trend for cancelled shows and those on the verge of cancellation to seek alternate networks for distribution, such as Arrested Development. I foresee production companies watching this effort by YouTube closely to see if they can farm their "less successful" (by Nielson ratings anyway) shows for profit through that medium. But don't get me started on why Nielson ratings are dead.
Here's to YouTube's efforts and hoping they're successful! I can't wait to stop subscribing to Lifetime for Women...
Read More
YouTube Now Offering Subscription Channels
On May 9th, YouTube announced a pilot program for paid subscription channels. Excerpted from their announcement:
Starting today, we’re launching a pilot program for a small group of partners that will offer paid channels on YouTube with subscription fees starting at $0.99 per month. Every channel has a 14-day free trial, and many offer discounted yearly rates. For example, Sesame Street will be offering full episodes on their paid channel when it launches. And UFC fans can see classic fights, like a full version of their first event from UFC’s new channel. You might run into more of these channels across YouTube, or look here for a list of pilot channels. Once you subscribe from a computer, you’ll be able to watch paid channels on your computer, phone, tablet and TV, and soon you’ll be able to subscribe to them from more devices.
Interestingly, none of the paid channels include YouTube sensations like Annoying Orange and the like, although by now they may have enough of an established revenue stream through alternate means that YouTube's one dollar a month wasn't enough to draw them in. But think about that for a second. One dollar a month. Using an example like Annoying Orange, which catches upwards of 150,000 views per episode, if every one of those viewers paid the dollar, posting just one video a month would earn roughly $150,000 minus taxes and fees, right? $150,000 in one month? Sure, I'll take that.
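As a rough back-of-the-envelope sketch of that arithmetic (assuming, generously, that every monthly viewer converts into a paying subscriber at the $0.99 minimum, and ignoring whatever cut YouTube takes):

```python
subscribers = 150_000    # assumption: every monthly viewer subscribes
price_per_month = 0.99   # YouTube's stated minimum subscription fee

gross_monthly = subscribers * price_per_month
print(f"${gross_monthly:,.2f} per month, before taxes, fees, and YouTube's cut")
```

Not bad for one video a month, even before anyone haggles over the revenue split.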
Suffice it to say that for $1, there will be people who try a channel and don't like it, so to keep people's interest the content will have to be great. Which is good, because this model will force all potential YouTube producers to step it up a notch if they want to earn revenue beyond what ads can provide. That raises the question: will we begin to see channels that used to be free start switching over to the $1 a month? I doubt it. I think that, as with Netflix trying to spin off DVDs as a separate service, the established customer base will be resistant to change.
I think the best way existing channels can take advantage of the new distribution method is to create a separate YouTube channel and start pointing the freeloaders (me included) over to the paid version. Expect the call to action at the end of YouTube videos to start including the line "check out my/our other channel," which when clicked will bring you to the paid channel lineup.
This improved content reinforces the You in YouTube because it's content created for and demanded by us. Although there are numerous high-quality shows on television, like my current fav Hannibal, we're forced to pay for that channel along with however many others we don't watch.
Speaking of not watching, there's a current trend of cancelled shows and those on the verge of cancellation seeking alternate networks for distribution, such as Arrested Development. I foresee production companies watching this effort by YouTube closely to see if they can farm out their "less successful" (by Nielsen ratings, anyway) shows for profit through that medium. But don't get me started on why Nielsen ratings are dead.
Here's to YouTube's efforts and hoping they're successful! I can't wait to stop subscribing to Lifetime for Women...
Monday, May 13, 2013
Labels:
browsers
Authorized Applications and Google Chrome
Today I'd like to share a thought regarding a scenario where applications that users are allowed to use are controlled via Group Policy and/or a desktop agent and outbound Internet access is inspected by a gateway proxy. In these types of controlled environments, Internet Explorer is typically the preferred browser by systems administrators due to its ease of manipulation from a centralized management perspective.
That aspect of centralized management can be a pain in the butt from a user perspective. If the installed version of Internet Explorer is outdated, runs slow, and/or is generally difficult to use, most users will seek an alternative. The browsers people most commonly flock to in such situations are Mozilla Firefox and Google Chrome.
When people attempt to download Firefox, they may get blocked by the proxy if it uses a content filtering solution such as Blue Coat's WebFilter or some other enterprise product. These are broad, category-based blocks covering categories like "software downloads" or "web applications". Interestingly, Mozilla Firefox's download URL is categorized as software downloads, so if that category is blocked, users are prevented from installing Firefox. All pages for the Google Chrome download, on the other hand, are categorized as "Search Engines/Portals". Obviously blocking search engines is counter-productive, so the site is allowed, and with it the download of Google Chrome.
When the Google Chrome installation is first attempted, it will fail because the default install requires elevated privileges. However, when the installation fails, Google is kind enough to ask if we want to try installing without admin privileges. After clicking yes, Google Chrome installs successfully! Thus, after about 15 minutes of tinkering, we've circumvented our organization's centralized browser control.
Fortunately for this organization, we have a proxy. To prevent users from using Chrome even after they go through this process, simply block on the User-Agent HTTP request header using a regex. If you're unsure of the User-Agent string, check out whatismyuseragent.com. The regex to block Chrome can simply be "(.*)Chrome(.*)", since normally Chrome's UA string looks like this:
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
The (.*) is a wildcard that will catch everything leading up to "Chrome" and everything after it.
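As a quick sanity check on that pattern, here's a minimal Python sketch (the Firefox UA string below is just an illustrative example):

```python
import re

# The proxy-style pattern from above; the leading/trailing (.*) make it
# match the substring "Chrome" anywhere in the header value.
pattern = re.compile(r"(.*)Chrome(.*)")

chrome_ua = ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 "
             "(KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31")
firefox_ua = "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 Firefox/20.0"

print(bool(pattern.match(chrome_ua)))   # True  -> request would be blocked
print(bool(pattern.match(firefox_ua)))  # False -> request passes through
```

Note that a bare "Chrome" substring rule would behave the same way; the wildcards only matter if your proxy's rule engine requires a full-string match.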
Of course, a user could still get around this by changing Chrome's UA string, but that's a story for another day.
Sunday, May 5, 2013
Labels:
CCDC
,
CTF
,
maccdc
MACCDC 2013, A Blue Teamer's Lessons Learned: Part 4 - Game Time
This is the fourth part of a series of blog posts I'm writing to relate the various things I learned from getting to experience the glory that is MACCDC. Here is the table of contents:
#1 - Team selection
#4 - Game Time
Game Time
1) Make the most of your time. This can be construed in multiple ways, but if team members don't have access to the machines as expected, immediately start finding an alternate path. Familiarize yourself with the scorebot GUI to locate flags and injects. If you're waiting for an answer on something, try to multi-task. Ideally, there should never be a time when someone is sitting and doing absolutely nothing.
2) Don't get over-confident. I'm guilty of this myself. I set down the basics on two Linux boxes without applying deeper security, and they both got owned on day two. So, no matter the point standings, don't stop securing a system until the end of the competition.
3) Communication. It's the team captain's responsibility to receive and assign injects. At the same time, the team captain is going to be pulled in multiple directions. Therefore, the team captain needs to effectively disseminate the injects so that the whole team is aware of all the details. This can be done by having the team captain log in to every machine so each team member can see the injects, or he/she can assign the injects. If assigning injects, the team captain should ask the team who is familiar with the subject and assign each inject to the team member with the most familiarity. If no one knows the inject subject, the team captain should assign it to the person best positioned to multi-task.
4) Receiving injects. Injects are a high-scoring component of the game, so the team needs to identify all the potential ways injects can be delivered. This year that included a) email, b) phone, and c) sneakernet. These channels should be identified and monitored within the first two hours of competition time.
5) Inject handling. When team members receive an inject, they may get pulled away from completing it. If so, the team member handling that inject needs to hand it off to another team member so that it gets completed or progress is made. Basically, an inject should never stop being worked. This ensures the team receives points for completing the inject and that it is finished in case another inject builds upon it. When injects are received, the team captain needs to identify the deadline and keep monitoring progress as it approaches so that completion does not fall by the wayside.
6) Scorebot. Each team member needs to, at some point, open scorebot and monitor the services on their assigned VMs. Identify the services scoring the most points, and try to ensure they stay active. If team members don't have time, the team captain can perform this function on the "high side" (if there is one).
And that wraps up my lessons learned at a high level. I hope this helps those preparing for CCDC-type competitions. Check out Rob Fuller's presentation for more technical detail.
Monday, April 29, 2013
Labels:
CCDC
,
CTF
,
maccdc
MACCDC 2013, A Blue Teamer's Lessons Learned: Part 3 - Practice
This is the third part of a series of blog posts I'm writing to relate the various things I learned from getting to experience the glory that is MACCDC. Here is a "table of contents" which I'll update with relevant posts:
#1 - Team selection
#3 - Preparation (PRACTICE!)
#4 - Game Time
Preparation (PRACTICE!)
The team needs to have practice sessions where the systems are actually available to be used. Practices should be built around scenarios to provide a sense of what the competition will be like. During the practice sessions, each person should tackle the systems they're comfortable with first and then swap to something they're not comfortable with, such as starting on Windows and switching to Linux.
Any tools desired to be used for competition time need to be vetted first. Concerning tools, a quick note:
Da Tools
Toolsets - 1st party. Each system has its own built-in security tools that can be leveraged. Windows Server 2008 has an advanced firewall in which rulesets can be provisioned fairly granularly. Linux distros typically have iptables. These basic built-in toolsets need to be learned through practice. Before the shiny, fancy tools can be downloaded from the internet station, these built-in tools are all you have to work with.
Toolsets - 3rd party. Everyone loves finding research indicating a particular tool can do this or that, but unless you've actually used it and are comfortable implementing it, it's a waste of time and effort. Team members should prune the toolset at least a month before competition time to narrow down a definitive list. During that month of practice sessions, these tools need to be tested in both standalone and live-attack (red-team) scenarios.
Scripting. Red teamers automate as much as possible, and so should the Blue Team. Find simple scripts that can do some brute-force defense while you're still locking down the system. These scripts can be kicked off immediately and just run in the background, without waiting for an attack first. If they're already running from the start, they can at least immediately kick a Red team member off the box, which prevents them from setting up persistence or at least slows them down.
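As one hedged illustration of the kind of simple background script meant here (nothing specific to MACCDC, just a generic tripwire), a few lines of Python can baseline file hashes and flag anything added, removed, or modified, which helps catch a Red teamer dropping persistence:

```python
import hashlib
import os

def snapshot(root):
    """Map each file under root to its SHA-256 hash."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file; skip it rather than crash mid-sweep
    return hashes

def diff(before, after):
    """Return files added, removed, or modified between two snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    modified = sorted(p for p in before if p in after and before[p] != after[p])
    return added, removed, modified
```

Run snapshot() over sensitive directories (web roots, /etc) right after the initial lockdown, then re-run it in a loop and investigate any diff it reports.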
Continue reading MACCDC 2013, Part 4
Labels:
CCDC
,
CTF
,
maccdc
MACCDC 2013, A Blue Teamer's Lessons Learned: Part 2 - Staging a Lab
This is the second part of a series of blog posts I'm writing to relate the various things I learned from getting to experience the glory that is MACCDC. Here is a "table of contents" which I'll update with relevant posts:
#1 - Team selection
#2 - Preparation (staging a lab)
#4 - Game Time
Preparation: Staging a Lab
If possible, teams should stage ESXi servers with VMs emulating the systems they can expect to see at MACCDC. These VMs are nothing more than basic installations, including Windows 2000 (yes, really), Windows XP, Windows Server 2008, Ubuntu, and more, running web applications or other services. With these systems staged, a scenario similar to MACCDC's can be executed to get an idea of what's involved in securing each system.
Speaking from experience, if the centralized lab can't easily be worked out or turns out to be unreliable, team members should fall back on using VMware Workstation and working with the VMs one at a time. That independent work can be done between sessions when the main ESXi server is available, or in place of one if there's no ESXi server at all.
Continue reading MACCDC 2013, Part 3
Labels:
CCDC
,
Cyber (InfoSec) Competitions
,
cyber challenges
,
maccdc
MACCDC 2013, A Blue Teamer's Lessons Learned: Part 1 - Team Selection
MACCDC 2013, A Blue Teamer's Lessons Learned
This is the first part of a series of blog posts I'll be writing to relate the various things I learned from getting to experience the glory that is MACCDC. Here is a "table of contents" which I'll update with relevant posts:
#1 - Team selection
#4 - Game Time
And so we begin number one...
Team Selection
For this year's competition, schools were allowed eight people on a Blue Team. The teams were organized with a captain and co-captain overseeing the other six. During the competition, the captain is responsible (among other things) for communicating business injects to the team members and acting as the team's liaison with the white cell. Because the team captain can be asked to step away for interviews or captain-specific business injects, the co-captain should be capable of fulfilling those same duties.
The team needs to be composed of people with a good mix of skills across Linux, Windows, web applications, and firewalls. This means that every person on the team needs to be technically capable, or trained to be, by competition time. There were several instances at MACCDC 2013 where someone was pulled away to help another team member or complete an inject, leaving a laptop open. Ideally there should never be a time when someone's not working at a laptop (injects notwithstanding), which means each person should be technically capable of, and comfortable with, picking up where someone else left off.
Continue reading MACCDC 2013, Part 2
Friday, April 26, 2013
Labels:
Dell
,
noise reduction
,
PowerEdge 2950
Dell PowerEdge 2950: Silence the Noise, Part 2
First, a thank you to Neil for spurring my renewed energy into finding a solution to the 2950's noise level. For the background of this project, check out Silence the Noise, Part 1. In this post, I relate my research findings from my search for suitable fan replacements.
The Fans
To find a suitable replacement, we must first analyze the existing fans so we have a baseline for comparison. Dell PowerEdge 2950s come with Delta brushless axial fans in a 60mm x 60mm x 38mm form factor. Dell appears to have used a few variants, with Dell-approved replacement part numbers including JC972, PR272, YW880, and DC471. The ones in my 2950 are JC972, for which the corresponding Delta model number is PFC0612DE. Note that some sites report the thickness as 35mm instead of 38mm, but according to the spec sheet, the correct thickness to keep in mind is 38mm.
Here's a recap and quick reference for the 2950 fans:
- Dell Part Numbers: JC972, PR272, YW880, DC471
- Delta Part Number: PFC0612DE (I'm sure there are others)
- Form Factor: 60mm x 60mm x 38mm
- Air Flow: up to 67.8 CFM
- RPM: up to 12,000
- NOISE: 61.5 dB (one fan!!)
- Voltage: 12V
- Termination: 4 wire
- Features: PWM Control
The key elements we need to keep in mind for the replacement are the size (60mm x 60mm), air flow, noise, termination, and features.
To see like-manufacturer replacements, I checked Delta's website. Delta has a list of currently available fans in a similar form factor if you put in the correct search parameters. However, in the comments section of the hacking the BMC post, other PowerEdge 2950 owners reported that they swapped the 38mm thickness for thinner fans and they worked fine as long as the replacements had PWM control and a 4-wire termination. Here are some photos for reference:
Fan Label
4-Pin Connector
Now, if you've read my previous post, you know that I removed two fans, leaving me a max of about 120 CFM (~63 CFM for each of the two remaining fans). During that time, my CPU temp did not rise above 25 degrees. That means that with replacements in all four fan slots, I can drop to roughly 30 CFM per fan (4 x 30 = 120 CFM) and keep the same total airflow. Your mileage may vary.
On arnuschky's blog, a commenter reportedly fitted 60mm x 60mm x 25mm fans, so we don't have to stick with the 38mm thickness. A forum post at overclock.net provides insight into re-mapping the power pins from different fans onto Dell's power connectors.
Keeping that in mind, here is a list of potential replacements:
- Top Motor PWM Fan 60mm x 25mm ($4.75, max 48.8 dB, max 44.68 CFM)
- Cooljag Everflow 60mm x 25mm PWM Fan (F126025BU) ($9.99, max 33.5 dB, max 24.5 CFM)
- Evercool 60mm x 25mm High Speed PWM Fan (EC6025H12BP) ($9.99, max 36 dB, max 26.63 CFM)
- Nidec Ultraflo U60T12MUA7-57 60mm x 25mm 4-Pin PWM Fan ($4.99, max 32.5 dB, max 23 CFM)
- Dell Fan Assembly 12V DC 0.48A 60 X 25mm For Poweredge 2650 7K412 ($13.95, max 46.5 dB, max 38.35 CFM)
As you can see by visiting the links to these fans, all the power connectors differ from Dell's (with the possible exception of the last one) and would need to be re-pinned. Also note that they're all thinner, coming in at 25mm thick instead of 38mm. Instead of trying to re-pin a power connector, I decided to check Delta's list of model numbers to see if they have a fan that could more easily replace the JC972. To be posted in part 3...
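To make the screening concrete, here's a quick Python pass over the candidates above against the ~30 CFM-per-fan floor estimated earlier (specs and prices exactly as listed, which may have changed since):

```python
# Candidate fans from the list above: (name, price USD, max dB, max CFM)
candidates = [
    ("Top Motor PWM 60x25mm",        4.75, 48.8, 44.68),
    ("Cooljag Everflow F126025BU",   9.99, 33.5, 24.50),
    ("Evercool EC6025H12BP",         9.99, 36.0, 26.63),
    ("Nidec Ultraflo U60T12MUA7-57", 4.99, 32.5, 23.00),
    ("Dell 7K412 (PowerEdge 2650)", 13.95, 46.5, 38.35),
]

MIN_CFM = 30.0  # per-fan airflow floor estimated from the two-fan experiment

viable = [c for c in candidates if c[3] >= MIN_CFM]
for name, price, db, cfm in sorted(viable, key=lambda c: c[2]):  # quietest first
    print(f"{name}: {cfm} CFM at {db} dB, ${price}")
```

Only the two loudest candidates clear the airflow floor, which is part of why checking Delta's own catalog is a logical next step.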