[SOLVED] CPU/PSU Fan Spin For a Second and Stop on New Build

Also note that even with the battery removed, if the PSU was still connected and the switch was on, I have seen the +5Vsb cause it to hold CMOS data.
Oh? Then that was a faulty board, since the battery is directly in the CMOS holding-voltage circuit pathway. Removing the battery is like cutting a 1/2 inch piece out of the wire. Do you happen to remember the make and model?

It is important to remember motherboard engineers purposely use CMOS memory modules (around long before the PC) because they are volatile - EASY to reset. If they wanted user modifications to the BIOS information (the CMOS data) to be hard to reset, they would have chosen a different type of memory module, like an EEPROM or the like.

That said, the power supply should never - as in NEVER EVER - be left plugged in (or have the master switch on the back set to on) when removing or inserting ANY hardware component on the motherboard.

and draining power by holding down the power button for 30 seconds.
That's an old wives' tale and does nothing. On old AT (pre-ATX) cases, the front panel power button was directly connected via a wiring harness back to the AT power supply, so holding the button ensured the filter caps in the PSU were drained. The front panel power button did not connect to the AT motherboard.

But on ATX cases, the front power button is just a "remote" switch to a "momentary circuit" on the ATX motherboard that controls (via the +5Vsb standby voltage) the ATX power supply. And a momentary circuit is one that accepts the initial input when the button is pressed, but totally ignores the button setting after that, until it is pressed again. So holding down the button on a momentary circuit does nothing but tire your finger. ;)
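If it helps to picture what "momentary" means here, below is a rough software analogy (purely illustrative - this is not motherboard firmware, and the function name is made up; it just models the edge-triggered toggle behavior described above):

# Purely illustrative model of a "momentary" power-button circuit - not
# motherboard firmware. The point is that the state only changes at the
# instant the button goes from released to pressed; holding it changes nothing.

def run_momentary_circuit(button_samples):
    """button_samples: sequence of True (pressed) / False (released) readings."""
    power_on = False
    previously_pressed = False
    for pressed in button_samples:
        if pressed and not previously_pressed:  # react to the press edge only
            power_on = not power_on             # toggle the power state
        # a held button (pressed on consecutive samples) is simply ignored
        previously_pressed = pressed
    return power_on

# Holding the button across many samples still counts as one press:
print(run_momentary_circuit([False, True, True, True, True]))  # True  (one toggle)
print(run_momentary_circuit([False, True, False, True]))       # False (two toggles)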

Hopefully I did not cause any damage with the CMOS battery reset; I was very diligent about pulling it straight out and putting it straight back in to minimize any contact that might cause a surge.
With the care you took, I highly doubt any damage was done. But note that whenever I handle and install new batteries, I always put a clean sock over my hand to avoid touching the battery with bare fingers, as skin oils promote corrosion and grab on to any dust that wanders too close.
 
Dell OptiPlexes across several GX models would do it. These same models are the ones that would keep the date and time accurately until a power outage or until they were unplugged to be moved. Of course, that means the battery was bad, but as long as they were shut down and left plugged in, they would retain the time and date.
 
Dell? That's the operative word here. Dell - the King of Proprietary, even with PCs, where there is an industry standard for a reason. :mad: If you say Dell did something non-standard, I believe it. I bet tens of thousands of motherboards, if not more, were fried because users tried to replace their failed, bastardized Dell PSUs with an industry standard ATX power supply. :(

But...
These same models are the ones that would keep the date and time accurately until a power outage or until they were unplugged to be moved. Of course, that means the battery was bad, but as long as they were shut down and left plugged in, they would retain the time and date.
Ummm, that sentence "implies" the bad battery is still in the circuit. You initially said, "with the battery removed".

I agree 100% that if the dead battery is still in the circuit, the +5Vsb holds all sorts of circuits, including the CMOS circuit, alive until power is removed from the power supply. That said, I have to admit I am not sure I have ever tried powering up a motherboard without a battery to see what would happen.
 
That was how we figured out the CMOS was also powered by the +5Vsb (later confirmed by a Dell engineer as a "feature"). After seeing several older OptiPlexes that had no issues before being moved from one office to another, we noticed the batteries were dead, but they only lost the CMOS settings if they were also unplugged or there was a power outage.

More and more, Dell has moved to standardized components, but it seems they are always looking for a way to save 10 cents a unit, usually to their own downfall.
 
and draining power by holding down the power button for 30 seconds.
That's an old wives' tale and does nothing. On old AT (pre-ATX) cases, the front panel power button was directly connected via a wiring harness back to the AT power supply, so holding the button ensured the filter caps in the PSU were drained. The front panel power button did not connect to the AT motherboard.

But on ATX cases, the front power button is just a "remote" switch to a "momentary circuit" on the ATX motherboard that controls (via the +5Vsb standby voltage) the ATX power supply. And a momentary circuit is one that accepts the initial input when the button is pressed, but totally ignores the button setting after that, until it is pressed again. So holding down the button on a momentary circuit does nothing but tire your finger. ;)

Have to disagree from observation alone on this one. If I press the power button after removing all power connections from the surge protector/UPS, I see the system turn on for a brief second and then power down. It may not need to be pressed for 30 seconds, but it should be pressed to at least drain residual power. I can reproduce this on more than one desktop machine, so this is not just me doing something wrong in my build. HP, Dell, Gateway, etc. would have had to make the same exact mistake I did if that were the case, and I find that hard to believe.

Also, I know for a fact that there is a peripheral reset on laptops triggered by holding the power button down for 15-30 seconds after removing all power sources (AC adapter and battery). I have fixed many boot issues this way on multiple machines from different manufacturers. HP even has a page describing the steps and what they accomplish, which makes sense for peripherals that have entered a bad power state and need to have that state reset, e.g. USB ports that were trying to prevent a surge. Granted, laptops are a different animal from desktops, so power draining there may require the full 15-30 seconds as opposed to a desktop.

 
More and more, Dell has moved to standardized components, but it seems they are always looking for a way to save 10 cents a unit, usually to their own downfall.
Agreed, but also, using non-standard parts forces users to buy replacement and upgrade parts only from Dell. Proprietary parts typically cost more and greatly restrict consumer options - often to the point they have no option but to buy a new computer. :(

@writhziden - please note I was referring to PCs only - not notebooks. My mistake for not making that clear.
 
Not a problem. I mostly just wanted other users to know who might read this thread. Appreciate you clarifying. :-}
 
Well, things kinda turned for the worse. I received two blue screens within a three-day period. The first was after some updates were installed and while I was doing some video converting; I got a 0x3B that blamed my display driver, with some storage controller drivers on the stack. The latest was a 0x7E during a restart: updates were installed, the system restarted, and then a second restart caused the blue screen. The 0x7E had NTFS in the stack.

The primary suspects from the bugcheck and the stack were the display card and my SSD, but my SSD worked fine in my old system, so I pretty much ruled that out unless it was a compatibility issue with the new motherboard. The display card also did not seem likely, since highly intensive graphics processes had not caused crashes aside from the video conversion, which felt to me like a one-off event. That left my RAM as the next likely suspect given the storage-based drivers in the stack; cache issues with RAM to disk or vice versa can lead to those showing up in the stack.

Running Memtest86+ on the 32GB of RAM showed a number of errors overnight, and that was only on two passes. I have subsequently removed two sticks of RAM and am testing the second set of 16GB (since the RAM errors appeared to occur in the first set of 16GB, assuming Memtest86+ reads it in the order shown in the motherboard manual for slots 1-8). If this set of RAM comes up clean, I'll move it to the first two slots and see if the slots are good. Hopefully they are, since I would prefer to replace a set of RAM over a motherboard.
 
(since the RAM errors appeared to occur in the first set of 16GB, assuming Memtest86+ reads it in the order shown in the motherboard manual for slots 1-8)

Maybe. I've thought that in the past too, only to have the pair I thought was good fail...
 
Yeah, I'm not putting too much faith in whether Memtest86+ reads the RAM from slots 1-8 rather than 8-1 or in some other random order. I just figured it was worth starting with the pair in slots 1-2 removed in case Memtest86+ did read those as having failed. All errors occurred in the first 16GB it tested. If this set fails, I'll swap it with the other set and test again.
 
Running Memtest86+ on the 32GB of RAM showed a number of errors overnight, and that was only on two passes
It is my experience when Memtest86+ and/or Windows built-in checker finds an error (even just one), the RAM is bad. Period.

But it is also my experience that those software-based programs don't always find RAM problems; RAM can test clean for several passes and never toss an error, then fail during normal operation, or when paired with other RAM that tests fine.

It could be the slot, but I can't remember the last time I found a bad slot. Pairs of slots, yes, but not a single slot - except when users try to force RAM in backwards, or DDR2 into a DDR3 slot. :(

Definitely, you should ensure all your timings and voltages (i.e., any overclocking) are set to default settings until this is resolved.
 
I'll be doing a lot of testing over the coming weeks to find out which RAM module(s) resulted in errors. Based on the errors, it appears two of the modules are bad, but I cannot be 100% certain until I test them individually. I want to first narrow down which module pair causes issues (if any). I have read reviews saying this board tends to have some problems with multiple modules installed (64GB was what users posted in reviews), but whether that was due to the board itself or to users not knowing proper troubleshooting procedures, I cannot be sure. It'll likely take quite a bit of trial and error, testing, re-testing, etc. to figure out whether it is a module, a pair of modules, incompatible RAM/board (I've read that incompatibility issues are less likely these days than they were in the past as long as one gets RAM intended for Intel processors or RAM intended for AMD processors; this is my first build in nearly ten years), or the board causing issues. Any advice on reducing the time spent looking into this would be appreciated.

For what it's worth, I don't believe in overclocking. Seems like a bad idea to reduce hardware lifetimes for a slight increase in speed, but that's just my viewpoint.
 
For what it's worth, I don't believe in overclocking. Seems like a bad idea to reduce hardware lifetimes for a slight increase in speed, but that's just my viewpoint.
I am the same way. IMO, overclocking (today, anyway) is a marketing gimmick. As a hardware guy with some engineering background, I know designers don't build in overclocking abilities. They build to design, manufacturing, or materials limits; then the marketeers dumb down the published specs to add overclocking headroom.

What really irritates me is both AMD and Intel specify in their warranties that the warranties will be void if users subject the CPUs to conditions beyond the published specs - yet their marketing weenies tout their overclocking capabilities.

Worse, IMO, are ASUS, Gigabyte, MSI, Foxconn and all the other motherboard makers who not only provide overclocking capabilities with their boards, but often provide overclocking utilities to make it easy for users. Yet no motherboard maker will cover damage to CPUs (or RAM) if they are damaged due to overclocking. While these utilities are safer than manual adjustments, they still add stress to the components, power supply, and cooling.

Then of course there are the users who overclock without giving the inevitable increase in heat generation a first, let alone a second thought. :(

IMO, if you want more power, buy it. Or at the very least, if you overclock, do your homework first and make sure you have addressed the added heat concerns before getting your "tweak" on.
 
@writhziden...

Double check your RAM clock and timing settings in the BIOS regardless of whether you OC'd or not...

If you remember, a while back I was having issues with my PC... It ended up that the ASUS motherboard was running my RAM (1333) @ 1600... This continued after multiple BIOS resets. I didn't notice it until after I manually went through and adjusted each setting...

I would say you are on the correct track by testing your DIMMs individually... Digerati is correct on one thing... RAM slot failure is extremely rare. And having complete channels (pairs of slots) fail is also uncommon.
 
So far, every time I run Memtest86+ on one of the two pairs of RAM, I get errors. On the other pair, no errors. I think it's fairly safe to say the set I originally removed is bad. I will continue testing, though. I am currently running the suspected good pair in the first two dual channel slots. I had been testing each pair in the last two dual channel slots previously. Tonight, I will test the suspected bad pair in the first two dual channel slots to see if I again get errors. If I do get errors, I will test each module individually; the usual binary search method. ;-}
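For anyone following along, the halving approach boils down to something like the sketch below (the module names are made up, test_passes() stands in for an overnight Memtest86+ run with only that subset installed, and it assumes a single bad stick reproduces the failure - with two suspects you end up testing each one individually anyway):

# Sketch of the halving ("binary search") strategy for isolating a bad DIMM.
# Module labels are hypothetical; test_passes() represents an overnight
# Memtest86+ run with only that subset of modules physically installed.

def find_bad_module(modules, test_passes):
    """Return the module whose presence makes the memory test fail."""
    candidates = list(modules)
    while len(candidates) > 1:
        half = len(candidates) // 2
        first, second = candidates[:half], candidates[half:]
        # Install only the first half, run the test, and keep whichever
        # half still contains the failure.
        candidates = first if not test_passes(first) else second
    return candidates[0]

# Example with a pretend failing stick, "DIMM_3":
result = find_bad_module(
    ["DIMM_1", "DIMM_2", "DIMM_3", "DIMM_4"],
    test_passes=lambda installed: "DIMM_3" not in installed,
)
print(result)  # DIMM_3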
 
I have narrowed the errors down to one module and have issued an RMA with Crucial to replace that module. The new module should be here by Friday or Monday. In the meantime, I am running with triple channel 24GB of RAM, which is sufficient for now. :-}
 
Alright, I have not yet placed the new RAM module into my system. "Why?" you ask.

I had another blue screen with the good RAM in my system. It was the same type of error as before, pointing primarily to my storage controllers. My Intel chipset driver was showing as out of date (2011) for Windows 8. Gigabyte claims the driver should be dated 2012/10/23, but WinDbg gives a date in 2011 (I can't remember the exact date as I don't have it in front of me at the moment). I've updated the driver, and so far so good for a couple of weeks.
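As an aside, the date WinDbg reports for a driver comes from the TimeDateStamp field in the file's PE/COFF header, so you can sanity-check it outside the debugger with a quick script like the one below (the function name and path are just examples; point it at whichever driver file you want to check):

# Reads the TimeDateStamp (build/link date) from a PE file's COFF header -
# the same value WinDbg reports as the image timestamp for a loaded driver.
import struct
from datetime import datetime, timezone

def pe_build_date(path):
    with open(path, "rb") as f:
        data = f.read(4096)  # the headers live at the start of the file
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]    # e_lfanew in the DOS header
    assert data[pe_offset:pe_offset + 4] == b"PE\x00\x00"  # PE signature check
    timestamp = struct.unpack_from("<I", data, pe_offset + 8)[0]  # COFF TimeDateStamp
    return datetime.fromtimestamp(timestamp, tz=timezone.utc)

# Hypothetical example path - substitute the driver you actually want to check:
print(pe_build_date(r"C:\Windows\System32\drivers\iaStorA.sys"))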

I'm going to wait another month before I install the new memory just to prevent new variables from throwing a wrench in the works.
 
