Wikipedia:Reference desk/Computing

From Wikipedia, the free encyclopedia
Welcome to the computing section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

October 29

Late '90s/early 2000s MIDI generator?

Hello, I'm sure I've asked this before, at least once and likely twice or more, several years ago, but I've lost track of the answer in the interim so I'm asking again.

I'm looking for procedural MIDI generation software that existed probably by the year 2000. I do not remember its name; it was something similar to, but not, 'DirectMusic Composer'. It was relatively limited, but easy to use. When using it, you could pick from a selection of styles; for each style, you could select from a list of instrument sets and a list of 'moods' that were available for that style, and you could move the instruments around on a 2D square to make them quieter, louder, or more towards the left or right side; you could set the song length, and enable or disable the intro and outro; and you could export the results to MIDI.

The program was used to create some part of the music for the turn-of-the-millennium MMO "Graal Online"; consequently, examples of what the MIDI music output could sound like can be found here (played with the default Windows soundfont) and in this playlist (played with a different soundfont).

Can you help me figure out what this program was, please?

2600:6C55:4A00:A18:A1E5:7EDB:844:C2C4 (talk) 23:49, 29 October 2024 (UTC)[reply]

Found it. Microsoft Music Producer. Thank you! 2600:6C55:4A00:A18:A1E5:7EDB:844:C2C4 (talk) 00:05, 30 October 2024 (UTC)[reply]


October 31

Unsafe connection

How come I get an unsafe connection message for an https:// site? The address bar shows https://www.--.--.--, but the on-screen message is "The connection has timed out / An error occurred during a connection to www.---.--.--." The strange thing is that the address bar says it's a secure site, but the error message doesn't (and nor does Firefox's site information). Thoughts? SerialNumber54129 16:38, 31 October 2024 (UTC)[reply]

The error message does not specify the communication protocol. This does not imply the protocol was less secure.  --Lambiam 20:56, 31 October 2024 (UTC)[reply]

November 1

360 street view images

Are there any cars with cameras that are built-in in a way such that you can extract 360 street view images from them? Like Google street view, but without any added parts. ―Panamitsu (talk) 07:00, 1 November 2024 (UTC)[reply]

Most likely not. Tesla is an example: its vehicles are covered in cameras for a full view, but the cameras are 720p, which is too low a resolution to yield usable imagery as the vehicle speeds down the road. There isn't much reason for them to have better cameras. So why would any other vehicle have high-quality cameras all around? 68.187.174.155 (talk) 12:35, 1 November 2024 (UTC)[reply]
Formula One and other racing cars often have multiple cameras, used both for TV broadcast views and for monitoring bodywork at speed; some are mandatory and some optional. It may be that a particular car has enough cameras that a computer could reconstruct a 360° view (though this would not be routinely done). However, this is a very special case, doubtless outside your scope of enquiry. {The poster formerly known as 87.81.230.195} 94.6.86.81 (talk) 13:20, 1 November 2024 (UTC)[reply]

November 2

Please Simplify - What is Answer Engine Optimization (AEO)?

Please help me list resources from which I can learn more about Answer Engine Optimization (AEO), be it a course or a blog. Thanks in advance. MPBhopal (talk) 11:11, 2 November 2024 (UTC)[reply]

@MPBhopal Maybe the same as query optimization? Shantavira|feed me 16:32, 2 November 2024 (UTC)[reply]
The term Answer engine redirects to our article Question answering, which needs to be updated. This page about the Brave browser's answer engine might make a good source.

An answer engine is a system that tries to answer a question, rather than point to websites about the question. Thanks to the proliferation and quality of large language models (LLMs), search-integrated answer engines are now a possibility at scale. In fact, several companies that operate search engines have released similar systems (including Bing Copilot and Google Gemini).

Crucially, these LLM-type answer engines rely on Retrieval-augmented generation:

The secret ingredient of an answer engine is not the LLM that powers it [...] an effective answer engine requires both a model and access to a search engine.
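The "model plus search engine" combination quoted above can be sketched in a few lines. This is a hypothetical illustration only; `answer`, `search`, and `llm` are made-up names standing in for a real search API and language model, not any particular product's interface.

```python
def answer(question, search, llm):
    """Hypothetical retrieval-augmented generation loop: retrieve, then generate."""
    docs = search(question)                 # 1. retrieval: ask a search engine
    prompt = (
        "Answer using only these sources:\n"
        + "\n".join(docs)
        + "\n\nQ: " + question
    )
    return llm(prompt)                      # 2. generation: LLM grounded in the sources
```

The point of the quote is visible in the structure: the quality of the final answer depends on what `search` returns, which is exactly where SEO-style manipulation gets its foothold.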

So Answer Engine Optimization is a branch of Search Engine Optimization. Here "optimization" is used in the sense of making things worse for everybody. It is an attempt to promote websites - or perhaps product names - to bias search engine results and the answers provided by an LLM that has accessed the search engine, and summarized the sites it found, on behalf of a user. This comment from a Hacker News thread about ChatGPT Search gives a clue about the details of these shenanigans:

> Why would anyone ever publish stuff on the web for free unless it was just a hobby? So that ChatGPT mentions you, not your competitor, in the answer to the user. I have seen multiple SEO agencies already advertise that.

It's worth noting that one reason for the popularity of LLMs as a replacement for direct web searching is that they are currently sidestepping SEO manipulation. From another comment on that thread:

Third search (company name) got me an ENTIRE PAGE of ads and SEO optimized pages before the actual link to the actual product.

So this AEO thing is the latest development in the arms race between those seeking to enable product promoters and those trying to provide unbiased search results. (Within a large search company like Google or Bing, these may merely be different departments.) The objective is to defeat the use of LLMs to improve search results.
You apparently want a how-to guide. Such a guide would be hot property at the moment, I expect. That is to say, I would be surprised if anyone skilled in making money from the promotion of websites was willing to give away their secret methods for doing this in the very much hyped current context of LLMs, without also seeking money for sharing these putative secrets.
Edit: of course, that money could come from Google ads. Since manipulating search rankings is their business, it will presumably be easy to find the sites of those most competent at it, yet difficult to find the most useful guidance.  Card Zero  (talk) 16:57, 2 November 2024 (UTC)[reply]


November 4

floating-point "accuracy"

It's frequently stated that computer floating point is "inherently inaccurate", because of things like the way the C code

float f = 1. / 10.;
printf("%.15f\n", f);

tends to print 0.100000001490116.

Now, I know full well why it prints 0.100000001490116, since the decimal fraction 0.1 isn't representable in binary. That's not the question.

My question concerns those words "inherently inaccurate", which I've come to believe are, well, inaccurate. I believe that computer floating point is as accurate as the numbers you feed into it and the algorithms you use on them, and that it is also extremely precise (120 parts per billion for single precision, 220 parts per quintillion for double precision.) So I would say that floating point is not inaccurate, although it is indeed "inherently imprecise", although that's obviously no surprise, since its precision is inherently finite (24 bits for single precision, 53 bits for double, both assuming IEEE 754).

The other thing about binary floating point is that since it's done in base 2, its imprecisions show up differently than they would in base 10, which is what leads to the 0.100000001490116 anomaly I started this question with. (Me, I would say that those extra nonzero digits …1490116 are neither "inaccurate" nor "imprecise"; they're basically just false precision, since they're beyond the precision limit of float32.)
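Both the round-off in the opening example and the precision figures above can be checked directly. This sketch rounds 0.1 through a 32-bit float via the standard struct module, and prints the machine epsilons (math.ulp requires Python 3.9+):

```python
import math
import struct

# Round-trip 0.1 through a 32-bit float: the result is the nearest float32 to 0.1.
f32 = struct.unpack('f', struct.pack('f', 0.1))[0]
print(f"{f32:.15f}")   # 0.100000001490116, matching the printf above

# Machine epsilon: 2**-52 ~ 2.2e-16 for double (220 parts per quintillion);
# single precision has a 24-bit significand, so its epsilon is 2**-23 ~ 1.2e-7 (~120 ppb).
print(math.ulp(1.0), 2.0 ** -23)
```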

But I can see from our page on accuracy and precision that there are a number of subtly different definitions of these terms, so perhaps saying that floating point is "inherently inaccurate" isn't as wrong as I've been thinking.

So my question is just, what do other people think? Am I missing something? Is saying "floating point is inherently inaccurate" an informal but inaccurate approximation, or is it meaningful? —scs (talk) 13:50, 4 November 2024 (UTC)[reply]

Wiktionary: accurate says "accurate ... Telling the truth or giving a true result; exact", but float is not exact so it is not accurate in that sense. 213.126.69.28 (talk) 14:10, 4 November 2024 (UTC)[reply]
See also Floating-point arithmetic § Accuracy problems. What is not mentioned is the problem that little inaccuracies can accumulate. For example, consider this code:
x = 0.1
for i in range(62):
  x = 4*x*(1-x)
The true mathematical value computed, rounded to 15 decimals, is 0.256412535470218, but the value computed using IEEE floating point arithmetic will come out as 0.988021660873313.  --Lambiam 16:06, 4 November 2024 (UTC)[reply]
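The drift above can be reproduced by running the same loop side by side in double precision and in 50-digit decimal arithmetic (a sketch using the stdlib decimal module; digits of the decimal result beyond roughly the 31st place are themselves unreliable, since the r = 4 logistic map loses about one bit of accuracy per step):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50              # 50 significant decimal digits
xf, xd = 0.1, Decimal("0.1")
for _ in range(62):
    xf = 4 * xf * (1 - xf)          # IEEE double precision
    xd = 4 * xd * (1 - xd)          # 50-digit decimal
print(xf)   # ~0.988..., the drifted double result quoted above
print(xd)   # ~0.256412535470218..., close to the true value
```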
@Lambiam: Cute example. (Does it have a name?) Lately I've been impressed at how often cascading precision loss isn't a problem, although it's certainly one of the things you always have to watch out for, as here. (It's why I said "as accurate as... the algorithms you use on them".) —scs (talk) 14:28, 5 November 2024 (UTC)[reply]
See Logistic map § Solution when r = 4.  --Lambiam 20:01, 5 November 2024 (UTC)[reply]
@Lambiam: Aha: An "archetypal example of complex, chaotic behaviour". So we shouldn't be too surprised it's particularly sensitive to slight computational differences along the way... :-) —scs (talk) 22:48, 5 November 2024 (UTC)[reply]
The basic floating point operations are as accurate as it is possible for them to be, and the specification normally talks about precision, which measures the unavoidable deviation from being completely accurate. But no-one is going to carp about calling it inaccurate! NadVolum (talk) 16:57, 4 November 2024 (UTC)[reply]
@NadVolum: I am here to prove you wrong, because that is exactly what I am carping about! :-) —scs (talk) 17:31, 4 November 2024 (UTC)[reply]
There are two issues with floating point numbers that are both wrapped up in "they are not accurate." I personally never say that. What I say is that the same number can be stored in memory in different ways. For example, a human can tell you that 10e4 and 100e3 are the same number. But, to a computer, they are different. It doesn't parse out the value and compute 100000 and 100000. It compares exactly what it sees. 10e4 and 100e3 are not the same. Of course, computers use binary, not decimal, but that isn't the point. The point is that you have the same value being stored in different ways. You, as the human, don't control it. As you do operations on a value in memory, the value updates and the exact way it is stored can change.

Separately, floating point numbers do tend to drift at the very end. So, 3.000000... can become 2.99999999... or 3.00000000...0001. That is not "wildly" inaccurate. But 2.99999... is not the same value as 3.00000.

In the end, why do we care? It comes down to programming. If you have two floating point variables x and y and you want to know if they are the same value, you can't simply compare x==y and hope to get the right answer. What if x is 3.00000... and y is 2.99999...? Instead, you do something like abs(x-y)<0.000000001. Then, if there is a little drift, or if the two numbers are the same value but stored slightly differently, you get the correct answer. This came to a head way back in the 80s, when there was a flame war about making the early C++ compiler automatically convert x==y to abs(x-y)<0.0000000000000000001.

But what I believe you are arguing is that memory storage should be fixed instead of the programming, so that numbers are always stored in the exact same format and there is never any drift of any kind. That would be more difficult, in my opinion. 17:24, 5 November 2024 (UTC)
That's not what I was saying, but thanks for your reply. [P.S. 10e4 and 100e3 are the same number, in any normalized floating-point format; they're both stored as 1.52587890625 × 216, or more to the point, in binary as 1.100001101012 × 216.] [P.P.S. Testing fabs(x - y) < some_small_number is not a very good way of doing it, but that wasn't the question, either.] —scs (talk) 20:12, 5 November 2024 (UTC)[reply]
A fundamental problem is that numbers represented in numerical form, whether on paper or in a computer, are rational numbers. In print, we typically have numbers of the form a · 10^b, where a and b are whole numbers. In computers, a · 2^b is more common. However, most numbers are not rational. There is no way to compute the exact value of an expression involving transcendental functions, for example. There is no known algorithm that will decide in general whether the mathematical value of such an expression is itself transcendental, so at a branch asking whether this value is equal to some given number, we have no better recourse than computing it with limited accuracy and making a decision that may be incorrect.
BTW, "comparison tolerance" was a feature of APL.[1] The "fuzz", as it was colloquially called, was not a fixed constant but was a system variable with the strange name ⎕ct to which a user could assign a value. The comparison was more complicated than just the absolute difference; it was relative to the larger absolute value of the two comparands (if not exactly equal as rational numbers).  --Lambiam 21:37, 5 November 2024 (UTC)[reply]
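The ⎕ct behaviour Lambiam describes translates directly into a relative-tolerance test, and Python's stdlib math.isclose works the same way. A sketch (`fuzzy_eq` is a hypothetical name, and 1e-9 an arbitrary tolerance, not APL's default):

```python
import math

def fuzzy_eq(a, b, ct=1e-9):
    """Compare with a tolerance relative to the larger magnitude, like APL's quad-CT."""
    return a == b or abs(a - b) <= ct * max(abs(a), abs(b))

print(fuzzy_eq(0.1 + 0.2, 0.3))        # True, even though 0.1 + 0.2 != 0.3 exactly
print(math.isclose(0.1 + 0.2, 0.3))    # stdlib equivalent, also a relative test
```

A relative test scales with the operands, so it behaves sensibly for very large and very small values alike, which a fixed absolute threshold does not.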
I actually don't agree with the claim that numbers represented in numerical form...are rational numbers. If you're talking about the main use for them, namely representing physical quantities, they aren't rational numbers, not conceptually anyway. Conceptually they're "fuzzy real numbers". They don't represent any exact value, rational or otherwise, but rather a position along the real line known with some uncertainty. --Trovatore (talk) 22:36, 5 November 2024 (UTC)[reply]
(Taking the above comments as read:) Most of the significant issues with floating point numbers are programming errors, often slightly subtle ones. It is possible in the majority of cases to use rational numbers as an alternative, only producing a floating point representation when display or output is needed in that form. Again, for the majority of cases this would be a good solution, but in a very few cases the numerator and denominator could become very large (the logistic map example above would require ~2^62 digits; cancelling helps, but only a little), and for compute-intensive cases the general slowdown could be important. All the best: Rich Farmbrough 12:00, 6 November 2024 (UTC).[reply]
It's conceptually wrong, though, in most cases. Floating-point numbers usually represent physical quantities, and physical quantities aren't conceptually rational numbers. What we want is something that approximates our state of knowledge about a real-valued quantity, and floating point is the closest thing we have to that in wide use. (Interval arithmetic would be a *little* closer but it's a pain.)
That doesn't actually prove that you couldn't get good solutions with rationals, but it's kind of an article of software-engineering faith that things work best when your data structures align with your concepts. I don't know if that's ever been put to a controlled test. --Trovatore (talk) 18:25, 6 November 2024 (UTC)[reply]
Sure you are absolutely right for representing physical quantities in most cases - in chaotic scenarios whatever accuracy you measure with might not be enough regardless of the way you calculate. However computing is used for many purposes including mathematics. It's also used in ways where careful application of floating point will bring an acceptable answer, but naive application won't. All the best: Rich Farmbrough 23:03, 6 November 2024 (UTC).[reply]
Incidentally, here's a Perl program that gets a more accurate answer to the logistic map problem above, using floating point:
use Math::BigFloat;
my $x = Math::BigFloat->new('0.1');
$x->accuracy(50);  # Set desired precision
for (my $i = 0; $i < 62; $i++) {
    $x = 4 * $x * (1 - $x);
}
print "$x\n";
All the best: Rich Farmbrough 23:07, 6 November 2024 (UTC).[reply]
I guess you meant "...using arbitrary precision floating point" (i.e. perl's "BigFloat" package).
But this ends up being a nice illustration of another principle of numerical programming, namely the importance of using excess precision for intermediate results. Evidently that call to accuracy(50) sets not only the final printout precision, but also the maximum precision carried through the calculations. So although it prints 50 digits, only 32 of them are correct, with the rest lost due to accumulated roundoff error along the way. (The perl program prints 0.25641253547021802388934423187010674334774769183115, but I believe the correct answer to 50 digits — at least, according to my own, homebrew multiprecision calculator — is 0.25641253547021802388934423187010494728798714960746.) —scs (talk) 04:02, 7 November 2024 (UTC), edited 13:45, 8 November 2024 (UTC)[reply]
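The excess-intermediate-precision point can be illustrated with the stdlib decimal module: carry guard digits through the loop and round only the final result. A sketch, assuming 80 working digits (enough to absorb the ~19 digits the map's chaos consumes, leaving roughly 60 correct):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80              # work with 30 guard digits beyond the 50 we want
x = Decimal("0.1")
for _ in range(62):
    x = 4 * x * (1 - x)
getcontext().prec = 50
result = +x                         # unary plus rounds to the current 50-digit context
print(result)
```

Unlike the Perl run, all 50 printed digits should now be correct, because the round-off accumulated during the loop lives entirely in the guard digits that are discarded at the end.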
My original question was about how to describe the imperfections, not what the imperfections are or where they come from. But since someone brought up rational numbers, my own take is that there are several models you might imagine using for how computer floating point works.
Now, although real numbers are arguably what floating-point numbers are the farthest from — there are an uncountably infinite number of real numbers, but "only" 18,437,736,874,454,810,626 floating-point ones — it's the real numbers that floating-point at least tries to approximate. The approximation is supremely imperfect — both the range and the precision are strictly limited — but if you imagine that floating-point numbers are approximate, limited-precision renditions of certain real numbers, you won't go too far wrong. (As for the rationals, it's not strictly wrong to say that "floating point numbers are rational numbers", because the floating point numbers are indeed a proper subset of the rational numbers — but I don't think it's a useful model.) —scs (talk) 13:35, 8 November 2024 (UTC)[reply]
Actually, it is "strictly wrong to say that 'floating point numbers are rational numbers'". At least there is no injective ring homomorphism from the floats into the rationals, because the arithmetic is different. Of course the floats aren't literally a ring in the first place, but you can work out what I mean. --Trovatore (talk) 19:35, 8 November 2024 (UTC)[reply]

November 5

Monitor Is Dark

I have a Dell desktop computer running Windows 11 with a full-screen monitor. Early this afternoon, it was displaying the screen to prompt me to enter my passnumber, which I entered, and then the screen went dark. The computer itself is still functioning. I have shared some of its folders, and I can see them as shared drives on my laptop computer. My question is what I should try short of replacing the monitor. I haven't priced monitors yet, but I know that they cost between $100 and $200, and I am willing to spend that if necessary, but would of course rather spend on something else. I tried unplugging the monitor from the UPS and plugging it back in. Is there anything that can be inferred from the fact that the monitor turned off while it was logging me on? Is there anything in particular that I should try? Robert McClenon (talk) 00:42, 5 November 2024 (UTC)[reply]

Please disregard this question. I disconnected the monitor power cord from both the monitor and the power supply. Then I plugged it back into a different socket of the power supply, and back into the monitor, and the display is fine again. I don't know whether a connection had been loose or whether the socket in the power supply failed, more likely the former, but I will just leave it alone now that it is working. Robert McClenon (talk) 01:14, 5 November 2024 (UTC)[reply]
I am running Windows 11 and the same thing happened to me a few hours ago. I have three monitors. All went black. The computer was still on and running, but no display except for the mouse. It took me a bit to realize that as I moved the mouse, a gray pixel moved around the screens. I forced a shutdown on the computer by long-holding the power button, turned it back on, and all three monitors started working again. 12.116.29.106 (talk) 17:15, 5 November 2024 (UTC)[reply]
That was a different problem. That sounds like a failure in Windows 11. My problem turned out to be a hardware problem. I am satisfied that I solved my problem and that you solved yours. Robert McClenon (talk) 04:01, 6 November 2024 (UTC)[reply]

November 6

Turning Off Ad Blocker

Sometimes when I am viewing a news web site, there is a message asking me to turn off my ad blocker. I have not deliberately enabled an ad blocker, so I assume that something, maybe Norton, is blocking ads. If I am using Firefox, how do I determine what ad blocker is in use, so that I can turn it off if I want to view a page that doesn't like ad blockers? If I am using Chrome, how do I determine what ad blocker is in use, so that I can turn the ad blocker off? I have found that if I really want to bypass the ad blocker, I can use Opera, which is a less commonly used web browser, so that common security software doesn't mess with it, but I would like to be able to turn off the ad blocker if the web site tells me to turn off the ad blocker.

This is sort of an electronic arms race, with electronic counter-measures, and electronic counter-counter-measures. Robert McClenon (talk) 04:11, 6 November 2024 (UTC)[reply]

@Robert McClenon: I believe it potentially could be the tracker blocking from Firefox itself. I'm not sure whether there's an easy way to see what's blocking the adverts as it could potentially be down at network level. I suspect it's Firefox blocking trackers as occasionally when I use a browser that blocks trackers, I do get ad blocker disable notices. Zippybonzo | talk | contribs (they/them) 13:14, 6 November 2024 (UTC)[reply]
Robert McClenon: using Firefox, I had a similar problem with YouTube, and learned that not only Adguard Adblocker and uBlock origin needed to be turned off for YouTube to work, but that Malwarebytes had also acquired an ad-blocking aspect and also needed to be turned off.
On Firefox, you may be able to click a jigsaw-piece icon at top right, labelled 'Extensions' and see what you currently have turned on and off. Hope this helps. {The poster formerly known as 87.81.230.195} 94.6.86.81 (talk) 21:52, 6 November 2024 (UTC)[reply]

Intermittent but predictable IP connectivity

Context: I had an interesting issue, which I would like to know the technical cause of, partly out of curiosity, and partly so that I can have a more elegant fix should it recur. I resolved the issue by restarting my laptop (restarting the router didn't work and other devices did not have the problem).

My laptop had been working fine for a week or so on a new fibre connection, using the same router that we have had for several years. I went out and used my phone as a hotspot for my laptop. I came home, with the hotspot turned off, to discover very intermittent Internet access.

The laptop was connected for 3 minutes, disconnected for 1 minute. I ran ping -t from the command line to the gateway and logged the results. Ping -t should run once per second. I got between 177 and 179 successful pings, followed by 60-63 unsuccessful pings. I believe the slight variance from 180/60 was due to the reset happening in a lower level of the stack, so losing a little time while higher-level connections were established (of course I'd expect the counts to vary by 1 or 2 simply because of the coarse resolution).
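For anyone reproducing this kind of measurement, the per-second ping log can be summarised into run lengths with a few lines of Python (a sketch; `run_lengths` is a hypothetical helper, taking one boolean per ping):

```python
from itertools import groupby

def run_lengths(outcomes):
    """Collapse a sequence of per-second ping outcomes (True = reply)
    into (outcome, consecutive count) pairs."""
    return [(ok, sum(1 for _ in grp)) for ok, grp in groupby(outcomes)]

# e.g. a ~178-success / ~61-failure cycle like the one described above:
sample = [True] * 178 + [False] * 61 + [True] * 178
print(run_lengths(sample))   # [(True, 178), (False, 61), (True, 178)]
```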

Hypotheses welcome, they should explain the 3 minute and 1 minute time spans.

Note: I found a Reddit post where someone had connectivity in "2-3 minute" chunks, but the answers weren't particularly informative.

All the best: Rich Farmbrough 11:45, 6 November 2024 (UTC).[reply]


November 9
