Wall (6,341 threads)
Before asking a question, make sure to read the FAQ.
We aim to maintain a healthy atmosphere for civilized discussions. Please read our rules against bad behavior.
A proposal for the Japanese romanisation:
I think both romaji and kana readings should be shown on the site. While there's some agreement that serious students of the language should be reading kana, having the romaji makes the site more accessible for silly things like learning to say "I love you" in twenty languages.
As for how to generate it, I'd suggest using the B lines where possible. If there's no B line, or if the B line does not match the text (for instance because of names in the sentence), generate the reading with MeCab, which looks pretty solid.
This may require names and other unindexed items to be added to a B line if the romanisation needs to be corrected. Just the reading will do.
Does this sound reasonable?
I'm motivated to work on this if necessary, but it will probably be a little while before I have the time. First up would be to create the B line to reading converter, and then use it to test MeCab's accuracy on our data.
There are entries like '|1' in front of most parentheses that aren't described at http://www.edrdg.org/wiki/index.php/Tanaka_Corpus. I'm guessing they're indices for the reading?
Much as I love those indices in the B-lines (I invented them years ago), I think it might be better to go straight to MeCab. There are some tricks you would need to apply, e.g. where MeCab says a particle is "助詞,格助詞" you would leave it with spaces around it, and where it is "助詞,接続助詞" you would attach it to the preceding word.
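Those two particle rules could be sketched roughly as follows. This is my own illustration, not code from the thread; it assumes MeCab's default output, where the part-of-speech hierarchy is the first comma-separated fields of the feature column, and the function name `segment` is made up:

```python
def segment(tokens):
    """Join MeCab tokens into a spaced segmentation.

    `tokens` is a list of (surface, feature) pairs, where `feature` is the
    comma-separated string MeCab prints after the tab, e.g.
    "助詞,格助詞,一般,*,*,*,は,ハ,ワ".
    """
    out = []
    for surface, feature in tokens:
        pos = feature.split(",")[:2]
        if pos == ["助詞", "接続助詞"]:
            # conjunctive particles attach to the preceding word
            if out:
                out[-1] += surface
            else:
                out.append(surface)
        else:
            # case particles (格助詞) and everything else stand alone,
            # i.e. they get spaces around them
            out.append(surface)
    return " ".join(out)
```

For example, で tagged as 助詞,接続助詞 would be glued onto the preceding verb, while を tagged as 助詞,格助詞 would stay a separate token.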
Those "|1" are an artifact of the days when Paul Blay was maintaining the indices in MS Access and needed a way of disambiguating some words. They are not carried through to the B lines in WWWJDIC (I didn't know what they were until Trang explained them to me.)
If you are using MeCab, use the IPADIC version.
The B-lines would be necessary if you were to create links to WWWJDIC, as MeCab breaks up expressions/compound nouns/etc.
One of us has begun looking into adapting kakasi, but we have never used MeCab, so maybe, if you want, I can give you his email address so you can discuss how to use/configure MeCab?
Sure. I have only used it as a command-line tool (and in shell scripts), but I see it has bindings for Python, Perl, Ruby and Java. I just installed it with apt-get (Debian). I use the ipadic (mecab-ipadic) rather than the default juman dictionary.
You need to make sure you get the utf-8 configuration. Mine is euc-jp.
It's simple to use: "echo 日本語の分節 | mecab".
Feel free to ask.
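Driving that same command line from a script might look like the sketch below. The helper names are mine, and it assumes `mecab` is on the PATH with a UTF-8 dictionary and its default output format (surface, tab, feature string, one token per line, terminated by `EOS`):

```python
import subprocess

def parse_mecab_output(raw):
    """Parse mecab's default output into (surface, feature) pairs."""
    tokens = []
    for line in raw.splitlines():
        if not line or line == "EOS":
            continue  # EOS marks the end of a sentence
        surface, _, feature = line.partition("\t")
        tokens.append((surface, feature))
    return tokens

def mecab(text):
    """Equivalent of `echo text | mecab`."""
    raw = subprocess.run(["mecab"], input=text, capture_output=True,
                         text=True, check=True).stdout
    return parse_mecab_output(raw)
```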
The output seems to contain only katakana — is there a way to get hiragana?
Only by doing your own conversion - those morphological analysis systems don't really care whether it's one or the other.
In EUC-JP and in raw Unicode the conversion is simple, e.g. あ is 3042 and ア is 30A2 and so on. It's a little more messy in UTF-8 but quite doable with a simple algorithm. Of course where it is katakana in the original text, you would leave it that way.
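Following that suggestion, the shift between the two Unicode kana blocks is a constant 0x60, so the conversion could be sketched like this (my own illustration; the function name is made up):

```python
def kata_to_hira(text):
    """Convert full-width katakana (U+30A1..U+30F6) to hiragana by
    shifting each code point down by 0x60. Characters outside that
    range — including the prolonged-sound mark ー (U+30FC), latin
    letters, punctuation — pass through unchanged."""
    return "".join(
        chr(ord(ch) - 0x60) if "\u30a1" <= ch <= "\u30f6" else ch
        for ch in text
    )
```

In practice you would apply this only to readings generated by the analyzer, not to text that was katakana in the original sentence, as noted above.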
OK, that's what we've done while waiting for your answer, so we will keep it :)
So it's highly probable that it will be included in the next release.
Traditional and Simplified Chinese
I saw the comment about converting hanzi on-the-fly. Be very cautious about that, as there are many cases where it simply doesn't work. Proper Traditional<->Simplified conversion needs to work at the lexeme level and in some cases needs some context for disambiguation.
Jack Halpern wrote a very good paper about this about 10 years ago:
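To illustrate why character-by-character conversion fails: the same simplified character can map to different traditional characters depending on the word it sits in. A toy lexeme-level converter might look like this (the tiny lexicon and function name are illustrative only; a real system needs a large dictionary and, in some cases, surrounding context for disambiguation):

```python
# Toy simplified -> traditional converter working at the lexeme level.
# Note that 发 maps to 髮 in 头发 "hair" but to 發 in 发展 "development",
# so no per-character table can get both right.
LEXICON = {
    "头发": "頭髮",
    "发展": "發展",
    "中国": "中國",
}

def s2t(text):
    """Greedy longest-match over the lexicon; unknown spans pass through."""
    out, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in LEXICON:
                out.append(LEXICON[text[i:j]])
                i = j
                break
        else:
            out.append(text[i])  # no lexicon entry: copy the character
            i += 1
    return "".join(out)
```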
PS: how do I make a comment on another posting?
OK, I worked out how to do a follow-on. I'd clicked "reply" but it hadn't worked. Now it does.
The traditional-to-simplified Chinese conversion is not done at the character-by-character level; it tries to decompose the sentence first (you can see how the sentence has been segmented by looking at the pinyin).
As I've said, I'm in contact with the guy who develops it, so don't hesitate to report any bad segmentations and I will pass them on to him.
WWWJDIC index line.
I suggest adding links from words in the Japanese sentence to WWWJDIC entries using the information in the index line. That would be a useful 'first step' towards adding furigana to the sentence.
The basic set-up is relatively straightforward, but there is one complication — namely 'deliberately non-indexed text'. Punctuation, English words, place names and other proper nouns are not generally included in EDICT and so do not have entries in the Index line. Jim Breen should have a 'no index' field that includes all non-indexed text (although it may not be up to date). In order to parse a sentence properly you need both the index line and the non-indexed text.
Adding furigana to place names etc. should probably be left for later.
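A parser for one index-line entry could be sketched as below. This assumes the entry format documented on the EDRDG Tanaka Corpus page — headword, optional reading in `( )`, optional sense number in `[ ]`, optional sentence form in `{ }`, and an optional trailing `~` marking a checked pairing; the names `ENTRY` and `parse_b_line` are my own:

```python
import re

ENTRY = re.compile(
    r"(?P<word>[^\s(\[{~]+)"        # EDICT headword
    r"(?:\((?P<reading>[^)]+)\))?"  # reading, for disambiguation
    r"(?:\[(?P<sense>\d+)\])?"      # sense number
    r"(?:\{(?P<form>[^}]+)\})?"     # form as it appears in the sentence
    r"(?P<checked>~)?"              # checked/priority marker
)

def parse_b_line(line):
    """Split a B line into one dict per indexed word."""
    entries = []
    for chunk in line.split():
        m = ENTRY.fullmatch(chunk)
        if m:
            entries.append(m.groupdict())
    return entries
```

(The `|1` disambiguation marks discussed above are not carried through to the WWWJDIC B lines, so they are not handled here.)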
Word-by-word links based on the Japanese index words would be good, and not too hard to implement, I think.
At present I am pulling the sentences and indices into WWWJDIC once a week, and I put them through a utility which matches the text and the index contents, and reports if there is a mismatch (which usually means that someone has changed a sentence.) To get around the problem of "deliberately non-indexed text" I have a file of
words which I ignore if they are not in the index. You can see this list of words at http://www.csse.monash.edu.au/~...amplestopwords (in EUC-JP). Most are names. Some look a bit odd as they are two or more names which had been separated by punctuation (which I ignore.)
There are now 100 translation suggestions waiting to be checked at
I would urge people who understand Japanese to check them and either confirm or correct them.
In order to better determine possible translations for
it would help if I could view the source code.
Also, some translation items require code reworking as well. e.g. "linked to" should probably be "linked to » %s" (where %s is the sentence number) so that the Japanese could be something like "%s とつながる".
Normally the path of the file should be enough of a hint for you to figure out on which page of the website the string can be found.
If the path is something like /views/<something>/file.ctp, then you would usually (though not always) need to go to http://tatoeba.org/<something>/file
If the path is /controllers/<something>_controller.php, then the string is a bit harder to find, but it can be found somewhere in the pages that start with http://tatoeba.org/<something>/
I don't know how comfortable you are looking at source code, but it could be simpler if you just translated what you can first. We have a "test" version of Tatoeba where we test things before we update the "real" version of Tatoeba. As soon as you have your translations done (even partially), we can update the "test" version and you can then browse around in there to check if the translations fit or not. I'll give you the link in a private message.
Other than that, the source code can be found here:
Just note that the strings in Launchpad are not always exactly synchronized with the source code.
> I don't know how comfortable you are looking at source code
Without looking at the code it's very difficult to correctly translate things like
<b>Share</b> your knowledge.
because they are handled as _two strings_ and the order needs to be reversed in Japanese.
Ah yes, I forgot to mention: as you noted, there are some strings that we forgot to make more "compliant" for internationalization.
You can send me an email to list those you find. I'll fix it in the code and update the strings in Launchpad.
Question, which places more strain on the server: generating sentences using a keyword query or using the random sentence generator?
Random sentences, for sure. MySQL doesn't like random at all ^^ We're working on making it faster.
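The usual reason is that `ORDER BY RAND()` has to generate a random value for every row and sort the whole table. A common workaround (sketched here with SQLite standing in for MySQL; the table and function names are made up) is to pick a random id and fetch the first row at or after it:

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sentences (id INTEGER PRIMARY KEY, text TEXT)")
conn.executemany("INSERT INTO sentences VALUES (?, ?)",
                 [(i, "sentence %d" % i) for i in range(1, 1001)])

def random_sentence(conn):
    """Pick a random row via an indexed id lookup instead of sorting
    the whole table. Assumes ids are reasonably dense; with big gaps
    the distribution is skewed and you might retry instead."""
    lo, hi = conn.execute("SELECT MIN(id), MAX(id) FROM sentences").fetchone()
    rid = random.randint(lo, hi)
    row = conn.execute(
        "SELECT text FROM sentences WHERE id >= ? ORDER BY id LIMIT 1",
        (rid,)).fetchone()
    return row[0]
```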
Would be really cool if we could add audio someday to the example sentences ;-)
We plan to do so. You will have more details and maybe a proof of concept at the beginning of April :)
Looking forward to that!
Did you change something with the database dump? This Saturday's jpn_indices contain invalid utf8 characters and the affected lines seem to be truncated.
The following sentence ids have problems: 83767, 91272, 140460, 146080, 152054, 190707, 195118, 199753, 205628, 211131, 213530, 235850
Ah, indeed, indeed. I had changed the 'text' field from varchar to varbinary, but kept the length to 500. That's why those entries were truncated. I've fixed it and did a new export of the jpn_indices.
> This Saturday's jpn_indices [...]
How do you know about that, by the way? I don't remember making it official yet that the download files would be updated on Saturdays. (Or did I? o.o)
1000+ sentences in arabic.
I'd like 2 thank everyone that has ever thanked everyone. On behalf of all of us you've thanked I say thank u for thanking us.
lol dane cook is brilliant :D
:D thank you^^.
I believe in ghosts. I believe in aliens. But theres no way u will ever persuade me into believing in alien ghosts. Ridiculous.
I believe in the sentence method. I believe in language websites. But theres no way u will ever persuade me into believing in sentence websites. Ridiculous
yay! first tatoeba joke :P (hmm I wonder if I can consider this a wall abuse..)
omg you're so funny, stop "abusing" the wall :D
I should just mention I never said that :P
But I do think it. Well, especially the "abusing the wall" part, because now I'm working on figuring out how to paginate this wall. Certainly there will be more abuse.