A Sky, cable and digital tv forum. Digital TV Banter


uk.tech.digital-tv (Digital TV - General) (uk.tech.digital-tv) Discussion of all matters technical in origin related to the reception of digital television transmissions, be they via satellite, terrestrial or cable. Advertising is forbidden, with no exceptions.

computer slow



 
 
  #1  
Old April 21st 15, 09:53 PM posted to uk.tech.digital-tv
Johnny B Good[_2_]
external usenet poster
 
Posts: 459
Default computer slow

On Mon, 20 Apr 2015 18:15:46 +0100, Woody wrote:

====snip====


If none of those help then a re-install will, as the pagefile will be
replaced amongst other things. Mine - on XP SP3 - is well over 2GB and
Windoze has to keep searching it. On a new install it will start at a few
MB before it grows.


You've hit the nail right on the head, but only by accident and for all
the wrong reasons.

What a 'pagefile' (aka 'swapfile' or, better yet, a swap *partition*)
should do, in a properly designed multi-tasking general-purpose OS, is
*not* grow in size without manual intervention.

Sane OSes, like Linux and BSD, use a *fixed*-size swap area (a file or,
better still, a dedicated partition) which avoids the additional and
totally unnecessary overheads of resizing the swap(page)file on the fly,
and the associated growth of fragmented free disk space that adds to the
existing woes of 'Fragmentation Hell' so typical of a windows
installation on a single huge partition - the 'all your eggs in one
basket(case)' scenario universally favoured by the lazy ****wit OEMs
determined to toe Microsoft's party line.
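For comparison, the fixed-size swap arrangement described above can be set up on a modern Linux system along these lines (a sketch only: the 4G size and the /swapfile path are illustrative, and a dedicated swap partition would simply be listed in /etc/fstab instead of a file):

```shell
# Reserve one contiguous 4 GiB file for swap (size is illustrative)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile      # swap must not be world-readable
sudo mkswap /swapfile         # write the swap signature
sudo swapon /swapfile         # enable it for the running session

# To make it permanent, add a line like this to /etc/fstab:
#   /swapfile  none  swap  sw  0  0
```

Because the file is allocated once at full size and never resized, it stays contiguous and incurs no resizing overhead thereafter.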

Microsoft's only motivation for "offering" the extra pagefile
configuration options is to disarm any criticism of their install
defaults, which are quite clearly aimed at maximising the rate at which
performance drops off with use due to fragmentation hell, in order to
con the typical (and gullible) consumer into believing that PC hardware
"naturally" 'gets tired' and requires replacement after just 12 to 18
months.

The weird and, quite frankly, nonsensical "System Recommended" option
whereby the pagefile is "Dynamically resized to suit demand" only exists
to accelerate the loss of performance with disk usage.

The very first thing you should do after the initial installation of
*any* version of windows is to go to advanced settings and manually
configure the pagefile's min and max values to equal sizes, typically a
figure between 1 and 2 times the installed ram. This should give you a
pagefile with a minimum of initial fragmentation and no further
fragmentation effects (on either itself or the free space on the
partition it lives on).
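On Vista and later, that min = max configuration can also be scripted from an elevated Command Prompt via WMIC (a sketch: the 4096MB figure is illustrative, so pick 1 to 2 times your installed ram, and the C:\pagefile.sys path assumes the usual default location):

```shell
:: Disable automatic pagefile management, then pin the size (values in MB).
:: Run from an elevated Command Prompt; 4096 is an illustrative figure.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096
```

A reboot is needed before the new fixed size takes effect.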

The one thing that should *not* happen with a properly installed windows
is a pagefile that starts at a small size, growing in size with use.

However, the advent of SSDs for installing the OS and basic software
onto, has largely destroyed Microsoft's intended effect on system
performance via pagefile fragmentation activity. Even so, it is still
good practice to change this cockamamie pagefile option setting to a more
sane setting which results in a fixed size pagefile as per the practice
common to alternative OSen.

Whilst you're at it, you might as well change the cockamamie windows
update default from the "I'm a Lazy **** of a Consumer: automatically
look for, download and install without my express permission" option to
the one where it only looks for updates and *tells* you when the next
batch is available, so you can download them at *your* convenience and
apply them when it suits you to do so.
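The notify-only behaviour can be forced via the Windows Update policy registry key (a sketch: run elevated, value 2 means "notify before download", and the Policies branch is assumed to be honoured on your Windows edition):

```shell
:: AUOptions: 2 = notify before download, 3 = auto-download but notify
:: before install, 4 = auto-download and schedule install (run elevated).
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v AUOptions /t REG_DWORD /d 2 /f
```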

Life's full of enough unpleasant surprises as it is without adding the
woes of Microsoft updates ambushing you when you least expect it, at the
most inconvenient moments possible.

If you want to search out any other system settings to play around
with, concentrate on those which Microsoft have labelled as
"recommended" and used as the default. So far, I have yet to come across
any such "recommended" setting that wouldn't be more accurately
described, from the point of view of the user's sanity and patience, by
prefixing the word "Not".

As for all the many other sources of 'sluggish computer' syndrome, I'll
leave it to Gary to run SpyBot S&D, MBAM Free and a selection of other
crap-cleansing utilities available from bleepingcomputer.com, such as
ComboFix. This last recommendation, ComboFix, seems to deal with the
latest zero-day threats that all the other 57 varieties[1] of antivirus
products available to "The Consumer" seem to studiously avoid detecting,
so it would be my first choice to run before applying a lesser-abled AV
product to the task.

[1] At the last count, there were 55 alternative AV scanning engines in
use by the VirusTotal web site. If Heinz were selling AV products instead
of foodstuffs, the claim to having 57 varieties available would be an
order of magnitude more truthful than it ever was for the foodstuff
product lines they lay claim to today.


--
Johnny B Good
  #2  
Old April 21st 15, 10:56 PM posted to uk.tech.digital-tv
Richard Tobin

In article ,
Johnny B Good wrote:

Sane OSes, like Linux and BSD, will use a *fixed* sized swap area (file,
or better still, a dedicated partition space) which doesn't have the
additional and totally unnecessary overheads involved with resizing the
swap(page)file on-the-fly and the associated growing of fragmented free
disk space


Thirty years ago the overhead of using a normal file for swap might
have been significant; now it isn't. Dynamically-sized swap space in
files should cause no more filesystem fragmentation than any other
files.

If your complaints about Windows are right, they reflect the poor
Windows implementation. MacOS X creates additional 1GB swap files
as required, and it works well.

And if swapping affects your performance at all, it means that you
haven't got enough memory for the programs you're running. Today swap
serves as space for currently inactive program memory and for
hibernation. If you're actually paging significantly in normal
use performance is going to be awful regardless of how swap space
is allocated.

-- Richard
  #3  
Old April 22nd 15, 08:33 PM posted to uk.tech.digital-tv
Johnny B Good[_2_]

On Tue, 21 Apr 2015 21:56:08 +0000, Richard Tobin wrote:

In article ,
Johnny B Good wrote:

Sane OSes, like Linux and BSD, will use a *fixed* sized swap area
(file,
or better still, a dedicated partition space) which doesn't have the
additional and totally unnecessary overheads involved with resizing the
swap(page)file on-the-fly and the associated growing of fragmented
free disk space


Thirty years ago the overhead of using a normal file for swap might have
been significant; now it isn't. Dynamically-sized swap space in files
should cause no more filesystem fragmentation than any other files.


The point you seem to be missing is the contribution to free-space
fragmentation made by the dynamic resizing of a pagefile (aka swapfile),
which aggravates the fragmentation created by other file activity. Not
only do you have that fragmentation effect to consider, you also have
the extra processing overheads of the dynamic resizing algorithm itself.

These days, disk space isn't at such a premium that you can't afford to
tie up several gigabytes' worth (perhaps 12GB in a system with 6 to 12GB
of ram). If you're going to effectively permanently tie up 12GB in a
special system file, it's best all round if it occupies one large single
contiguous run of sectors (even if it resides on an SSD, where
fragmentation has far less, though not zero, effect on performance).



If your complaints about Windows are right, they reflect the poor
Windows implementation. MacOS X creates additional 1GB swap files as
required, and it works well.


I'm not familiar with MacOS's virtual memory algorithms. That
"additional 1GB swap files" seems a very specific size. In windows, you
can specify more than just one pagefile, provided you have more than just
a single partition space in which to create them. And, when this option
is available, you can specify what size each of these pagefiles can be.


And if swapping affects your performance at all, it means that you
haven't got enough memory for the programs you're running. Today swap
serves as space for currently inactive program memory and for
hibernation. If you're actually paging significantly in normal use
performance is going to be awful regardless of how swap space is
allocated.


Yes, awfully bad performance is the price paid for being able to open
more apps and files than is good for you without crashing or locking the
system up due to an unexpected shortage of ram. Funnily enough, such a
lock-up was a consequence of *not* disabling dynamic swapfile sizing on
a win98 box close to running out of disk space - worse still in *that*
case, since it often resulted in a well and truly fried win98 box
requiring some non-trivial effort to restore to a working state.

Guess what? All multi-user/multi-tasking OSes use virtual memory for
exactly the same reasons and with just about the same performance trade
offs.

It turns out that doing without a pagefile at all, on the grounds that
(say) a win7 64-bit system has a whopping 32 or even 64GB of ram
installed, is actually not a good idea. Contrary to what you may have
been told, virtual memory on disk storage volumes is used as a
mechanism to maintain performance in the face of overly ambitious use of
multiple applications and data files, even when the system has "more
than sufficient ram" to do the job.

It's true that HDD-hosted virtual memory is several orders of magnitude
slower than real ram, but "slow" is infinitely preferable to "stop".
However, that's only of importance in (hopefully) edge cases. The more
usual way virtual memory maintains performance is by giving the OS a
mechanism to satisfy an app's ofttimes outrageous pre-emptive memory
allocation demands from the pagefile pool rather than the ram pool, thus
keeping as much of the real ram as possible available for active
processes.

If all the apps opened were allowed to make a "land grab" from ram,
the system could find itself having to juggle ram and pagefile soon
enough anyway. Quite often, an app may never actually call upon any of
its pre-empted memory allocation, or only a tiny fraction of it. The
worst offenders are, typically, the crap apps that insist on loading
component parts of themselves at boot time just to speed up their
start-up should the user actually want to make use of their features.

Windows can manage virtual memory just as well as any other OS,
provided the cockamamie settings are disabled in favour of a single
fixed-size page file per disk drive. What happens after that is
basically down to how hard the user tries to make their PC work in
relation to how well it is specced for the tasks involved.

Although there are many reasons, other than extreme fragmentation of
the file system, for windows to slow to a standstill, reconfiguring the
pagefile settings for maximum performance and minimum fragmentation of
the FS is still a primary requirement - even when SSDs are used - to
slow the inevitable tarnishing of a freshly installed windows OS's
initial shine.

For those still using an HDD to host the OS and apps, the pagefile
settings have been Microsoft's main trick for maintaining the myth that
PC hardware rapidly becomes tired (typically within 12 to 18 months of
initial purchase). If consumers ever started complaining (as if they
would!), Microsoft would have a virtually "instant fix" ready to deploy
at the drop of a hat. Naturally enough, such a fix would only be applied
in extremis and, thus far, remains unused.

One final point to consider as to why a fixed size pagefile is so
beneficial in this fight against performance decline is the massive churn
of system files due to the endless windows update cycle. Without this
latter factor, the dynamic pagefile algorithm wouldn't be quite as
effective as Microsoft would desire in order to keep their OEM partners
happy in creating an artificial demand for higher performance, less tired
replacement kit.

It's only in recent years that SSDs have begun sabotaging Microsoft's
efforts to persuade consumers to upgrade more frequently than is
strictly necessary (or forcing the more savvy consumers into
re-installing windows in a vain attempt to 'get back to the good old
days' of a fresh system, along with all the pain of dealing with yet
another round of service pack upgrades and hotfixes).

Microsoft's munificence in providing service pack upgrades and hotfixes
*for free* rather serendipitously works in their favour as far as the
primary function of defaulting the pagefile behaviour to their
"Recommended" setting goes. If you think for one moment that anything
labelled by Microsoft as "Recommended" is to your benefit, think again.

Microsoft, like all major commercial corporations with consumers as
their target market demographic, is only interested in one thing:
making money as cheaply as possible. Their attitude to this demographic
can be neatly summed up by W. C. Fields' saying, "Never give a sucker an
even break."

Every fresh installation of Microsoft Windows comes pre-sabotaged. A
savvy user will know to roll their sleeves up and get down and dirty with
the inner workings of the machine to remove all those unwanted sabots
before putting it to productive use. The pagefile settings are just one
of many such sabotaged settings that need to be dealt with tout de suite
(it just happens to be the most significant one as far as the phenomenon
of system slowdown is concerned).

--
Johnny B Good
  #4  
Old April 22nd 15, 11:31 PM posted to uk.tech.digital-tv
Phi

"Johnny B Good" wrote in message
...
====snip====



I can't multitask so do not need lots of programmes running
simultaneously, and my OS runs very quickly without a pagefile.

  #5  
Old April 23rd 15, 02:09 PM posted to uk.tech.digital-tv
Johnny B Good[_2_]

On Wed, 22 Apr 2015 23:31:03 +0100, Phi wrote:

"Johnny B Good" wrote in message
...


====snip====

Every fresh installation of Microsoft Windows comes pre-sabotaged. A
savvy user will know to roll their sleeves up and get down and dirty
with the inner workings of the machine to remove all those unwanted
sabots before putting it to productive use. The pagefile settings are
just one of many such sabotaged settings that need to be dealt with tout
de suite (it just happens to be the most significant one as far as the
phenomenon of system slowdown is concerned).



I can't multi-task so do not need lots of programmes running
simultaneously and my OS runs very quickly without a pagefile.


Even assuming you've totally disabled crash dumps and hibernation
(assuming a desktop PC)[1] as well as disabling the pagefile, I rather
doubt that windows has entirely disabled paging (even if it only creates
a modestly sized pagefile of just a hundred or so MB), so (certainly on
win2k and XP) you won't be running without any pagefile at all.
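Whether windows is actually running with a pagefile can be checked from the command line (a sketch using WMIC, which is available on XP and later; sizes are reported in MB):

```shell
:: List any pagefiles Windows is actually using, with sizes in MB.
wmic pagefile get Name,AllocatedBaseSize,CurrentUsage
```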

It's just possible that the more broken successors (Vista, win7 & 8) do
actually allow total elimination of any sort of pagefile once you've done
all the necessary system changes required to permit this option but I
suspect not.

*You* might not be "multitasking" but the system certainly will be. You
can eliminate a lot of the "multitasking" workload typically introduced
by crapware such as AV, Adobe's Reader (PDF) and FlashPlayer bits, the
sound chip and graphics card system tray applets, and shedloads of
other stuff like Apple's QuickTime and RealPlayer applets and plenty
more besides, before you can safely say you won't be needing the
services of a pagefile on a system well endowed with ram.

Yes, it's true enough that restricting memory resources to ram alone
will speed things up, but windows will effectively do that anyway even
if you've specified a 12GB fixed-size pagefile. The only saving from
forcing an absolutely minimal pagefile on the system is that the
pre-emptive memory requests made by apps will be serviced directly from
ram instead of indirectly via the pagefile. I honestly can't say how
much performance improvement this will offer, but my suspicion is that
it won't be a lot.

If you have a very tightly defined usage profile in mind, this can help
to marginally improve performance; otherwise it's likely to create more
problems than it's worth.

This all, of course, assumes we're discussing Microsoft Windows OSen.
It's a different kettle of fish as far as *nix based OSes are concerned
(My NAS4Free box is quite happily running without a swapfile or SWAP
partition - however, that's a very tightly defined usage case for the
version of FreeBSD that it's built upon).

[1] Whilst you're messing about in this area, you might also take a
moment to disable that server-centric feature known as "Automatically
Restart on System Failure". It's a sensible option on a server, where it
just might save an expensive on-site visit simply to press a reset
button, but for a desktop PC??? WTF's wrong with a manual reset, after
taking note of the error code displayed by the BSOD that would otherwise
vanish from sight before you've had a chance even to take a photo?
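That setting can also be cleared from the command line (a sketch via WMIC's recoveros alias, the same switch as the System Properties checkbox; run elevated):

```shell
:: Keep the BSOD on screen instead of rebooting on a crash (run elevated).
wmic recoveros set AutoReboot=False
```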

--
Johnny B Good
  #6  
Old April 23rd 15, 02:53 PM posted to uk.tech.digital-tv
NY

"Johnny B Good" wrote in message
...
[1] Whilst you're messing about in this area, you might also take a
moment to disable that server centric feature known as "Automatically
Restart on System Failure". A sensible option on a server where it just
might save an expensive on-site visit simply to press a reset button but
for a desktop PC???


The only time when it's useful for a desktop PC is if you access it remotely
when you're on holiday, via TeamViewer or similar. I've set the BIOS to
start up whenever the power is reapplied after a power cut, in case of a
power glitch while I'm away. I use my PC as a video recorder so it's useful
to be able to access it when I'm on holiday in case I want to schedule any
more programmes to record, and I like to know that even if the power goes
off, anything I've set will still record once the power comes back;
similarly that the PC will reboot rather than hanging indefinitely if it
crashes.

Even this doesn't guard against operator stupidity - I once turned off the
mains supply to my TV, forgetting that this also turned off the aerial
amplifier. I got loads of recordings of "no signal" :-( Didn't make that
mistake again...

  #7  
Old April 23rd 15, 03:20 PM posted to uk.tech.digital-tv
Johnny B Good[_2_]

On Thu, 23 Apr 2015 14:53:00 +0100, NY wrote:

"Johnny B Good" wrote in message
...
[1] Whilst you're messing about in this area, you might also take a
moment to disable that server centric feature known as "Automatically
Restart on System Failure". A sensible option on a server where it just
might save an expensive on-site visit simply to press a reset button
but for a desktop PC???


The only time when it's useful for a desktop PC is if you access it
remotely when you're on holiday, via TeamViewer or similar. I've set the
BIOS to start up whenever the power is reapplied after a power cut, in
case of a power glitch while I'm away. I use my PC as a video recorder
so it's useful to be able to access it when I'm on holiday in case I
want to schedule any more programmes to record, and I like to know that
even if the power goes off, anything I've set will still record once the
power comes back; similarly that the PC will reboot rather than hanging
indefinitely if it crashes.


I'm sure that you appreciate that the chances of such reboots actually
succeeding are slim to none. However, slim is better than no chance at
all. :-)


Even this doesn't guard against operator stupidity - I once turned off
the mains supply to my TV, forgetting that this also turned off the
aerial amplifier. I got loads of recordings of "no signal" :-( Didn't
make that mistake again...


I've been caught out by that. In this case it's down to the fact that
the mains-powered 4-way aerial distribution amp in the attic is powered
from a 13A socket fed from the basement, 1st and 2nd floor lighting
circuit (the ground floor lighting circuit has a fuse all of its own in
the CU). That means a loss of power whenever I've had to pull the fuse
in the CU to work on this circuit and have forgotten to make provision
for an alternative power feed (using a long extension lead from the
nearest ring main socket - Doh!). What's worse, I've been caught out by
this oversight more than once. :-(

--
Johnny B Good
  #8  
Old April 25th 15, 12:45 PM posted to uk.tech.digital-tv
tony sayer


It's just possible that the more broken successors (Vista, win7 & 8) do
actually allow total elimination of any sort of pagefile once you've done
all the necessary system changes required to permit this option but I
suspect not.


What's broken in, say, Win 7? It works fine here and does all I need,
so what's broken?

OK, I can understand that Vista was a pile of pants, but 7's fine...

*You* might not be "Multitasking" but the system certainly will be. You
can eliminate a lot of the "Multitasking" workload typically introduced
by crapware such as AV, Adobe's Reader (PDF) and



FlashPlayer bits,


Have to have that stuff for a lot of sites. Isn't HTML5 supposed to be
replacing it?..

Adobe Reader has been got rid of, replaced by Foxit Reader, which works
fine..

the
sound chip and graphic card system tray control applets and shedloads of
other stuff like Apple's Quicktime and Realtime applets and plenty more
besides before you can safely say you won't be needing the services of a
pagefile on a system well endowed with ram.


Nothing apple here thanks..


--
Tony Sayer



  #9  
Old April 26th 15, 02:53 PM posted to uk.tech.digital-tv
Johnny B Good[_2_]

On Sat, 25 Apr 2015 23:47:28 +0200, Martin wrote:

On Sat, 25 Apr 2015 12:45:05 +0100, tony sayer
wrote:


It's just possible that the more broken successors (Vista, win7 & 8)
do
actually allow total elimination of any sort of pagefile once you've
done all the necessary system changes required to permit this option
but I suspect not.


What's broken in say WIN 7, works fine here does all I need so what's
broken?.

OK I can understand that Vista was a pile of pants but 7's fine...


No problems with Win 7 here either.


Believe me, win7's desktop GUI is even more dire than Vista's, which
could at least be reverted to the look and feel of winXP (but not
win2k). In win7's case, "Classic" just means the default GUI setting
for Vista, which is an abomination as far as working with the file
system is concerned.

The major issue with Vista was that it was released prematurely. The
problems pretty well all disappear if you do a fresh install from an SP2
slipstreamed DVD. The resulting installation is remarkably lightweight
on active services (about 45 or so, compared to winXP's 40 and win7's
60 odd).

I know this from running a fresh Vista install against the CoA product
key on one or two Vista desktop machines. I was, to say the least, quite
surprised at this. My previous experiences of fresh or repair installs
using the original Vista release had been far from pleasant indeed.

I guess for consumers, win7 is probably all they need in terms of
usability, no doubt offering some improvements over its predecessor,
notwithstanding that most consumers' experience of Vista was largely
coloured by its brokenness upon first release to the great unwashed.

I keep seeing this "blind-sidedness" with regard to what's so ****e about
winXP and later versions of NT, caused by the deliberate restriction of
win2k to, essentially, corporate and SME customers.

The vast majority of computer 'consumers' only had the experience of
win98/winME as their 'benchmark', a very low benchmark indeed as far as
it compares to "The Wonder of WinXP" and its successors.

Win2K represented a pinnacle of development in desktop GUIs which was
immediately suppressed by the consumer grade versions (winXP and up)
which rapidly replaced it within its first year of release.

Yes, it was possible to configure winXP's desktop GUI to *look* like a
win2K desktop (but most definitely not its 'feel'). The really sad thing
is that MS went out of their way to lobotomise explorer, removing the
intelligent folder window resizing algorithm when the "Open each folder
in its own window." option was set.

For users who, like me, preferred to make the most of the GUI concept
and open each folder on the desktop (made more appealing by the advent of
affordable monitors offering far more screen real estate than the ancient
1024x768 'hi-res' monitors used at the turn of the century), this lobotomy
seriously ****ed up that particular way of using the explorer file
manager.

It's quite clear now that MS lost interest in the dwindling band of
computer users, preferring to service the consumer market, upon whom they
must have felt such 'ease of use' features would be a waste of resources
(tiny as that was). The ethos must have been "Don't provide the consumer
with useful tools, just make it as simple as possible for them to install
paid-for software packages and consume paid-for multimedia content as
fast as their heart's desire allows."

WinXP's desktop GUI was the thin end of the wedge of an interface
designed to ultimately simplify the PC down to the level of "A Magic Box"
with no USB ports or optical drive slots, simply endowed with "Magic
Potion Portals" ready to allow consumers to *buy* whatever software
solutions (Apps) they are being offered to satisfy their consumeristic
needs.

I thought win7 had gotten pretty close to this sad state of affairs
until win8 was unleashed on the great unwashed where, it turns out, even
a goodly portion of this 'Great Unwashed' felt it was a step too far,
forcing MS to backpedal in regard of the Desktop PC GUI.

Sadly, it seems MS are absolutely determined to **** off the computer
user class of consumer. Interestingly, I note that one Linux distro's USP
is the ability to customise the desktop GUI to emulate either win7, winXP
or Gnome2 in the free versions of Zorin http://zorin-os.com/tour.html
but you have to *pay* for the premium version in order to extend this
range of alternatives to include win2K.

Quoting from their web site:

"Easy to use, familiar desktop

The main goal of Zorin OS is to give new users easy access to Linux. That
is why Zorin OS incorporates the familiar Windows 7-like interface by
default to dramatically reduce the learning curve of this system while
still experiencing the main advantages of Linux. You can also utilise the
desktop with other interfaces. This is thanks to the exclusive Zorin Look
Changer which lets you change your desktop to look and act like either
Windows 7, Windows XP or GNOME 2 in the free versions of Zorin OS. The
Premium editions also include the Windows 2000, Unity and Mac OS X looks."

There's obviously enough demand from the more discerning users of
desktop PCs that Zorin feel they can get away with imposing a financial
penalty upon such 'customers'. It seems I'm not the only one who has this
crazy idea that win2k was MS's crowning achievement in desktop GUI
functionality.


--
Johnny B Good
  #10  
Old April 26th 15, 03:44 PM posted to uk.tech.digital-tv
Roderick Stewart[_3_]
external usenet poster
 
Posts: 2,178
Default computer slow

On Sun, 26 Apr 2015 13:53:47 GMT, Johnny B Good
wrote:

Interestingly, I note that one Linux distro's USP
is the ability to customise the desktop GUI to emulate either win7, winXP
or Gnome2 in the free versions of Zorin http://zorin-os.com/tour.html
but you have to *pay* for the premium version in order to extend this
range of alternatives to include win2K.

====snip====


You might also consider Ubuntu-MATE, which is now one of the official
releases. It has a choice of three different menu styles, one of which
can be made to look very much like the "classic" menu in Windows XP.

And if you want something that looks a bit like a Mac desktop, you
should have a look at Elementary.

It's generally far easier to get people to accept something new if it
has features that make it similar to what they already know.

Rod.
 




Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.SEO by vBSEO 2.4.0
Copyright 2004-2017 Digital TV Banter.
The comments are property of their posters.