# frozen_string_literal: true

# == Schema Information
#
# Table name: glitch_keyword_mutes
#
#  id         :integer          not null, primary key
#  account_id :integer          not null
#  keyword    :string           not null
#  whole_word :boolean          default(TRUE), not null
#  created_at :datetime         not null
#  updated_at :datetime         not null
#

class Glitch::KeywordMute < ApplicationRecord
  belongs_to :account, required: true

  validates_presence_of :keyword

  # Matchers are cached; drop the cache whenever this account's mutes change.
  after_commit :invalidate_cached_matcher

  # Returns the matcher for the given account, building and caching it on
  # first use.
  def self.matcher_for(account_id)
    Rails.cache.fetch("keyword_mutes:matcher:#{account_id}") { Matcher.new(account_id) }
  end
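
  # Usage sketch (illustrative; `status` stands in for anything with an
  # account id and text, and is not part of this class):
  #
  #   matcher = Glitch::KeywordMute.matcher_for(status.account_id)
  #   matcher =~ status.text  # => match index, nil if no match, or false
  #                           #    when the account has no keywords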

  private

  def invalidate_cached_matcher
    Rails.cache.delete("keyword_mutes:matcher:#{account_id}")
  end
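
  # Cache behavior sketch (hypothetical console session; `account` is
  # illustrative):
  #
  #   Glitch::KeywordMute.matcher_for(account.id)  # builds and caches a Matcher
  #   Glitch::KeywordMute.create!(account: account, keyword: 'spoilers')
  #   # after_commit deletes "keyword_mutes:matcher:#{account.id}"
  #   Glitch::KeywordMute.matcher_for(account.id)  # rebuilds from the database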

  # Keywords can match either substrings or whole words. Word-boundary
  # matching only works as intended in English and other languages that use
  # similar word-breaking characters; it works poorly in, say, Japanese,
  # Chinese, or Thai. A feature that misbehaves for some languages is
  # unacceptable, especially since the largest contingent on the Mastodon
  # side of the fediverse likely speaks Japanese.
  #
  # Unicode TR29[1] specifies word-breaking rules across all languages
  # supported by Unicode, but those rules deliberately do not cover every
  # case. TR29 itself notes:
  #
  #   For example, reliable detection of word boundaries in languages such
  #   as Thai, Lao, Chinese, or Japanese requires the use of dictionary
  #   lookup, analogous to English hyphenation.
  #
  # So word detection cannot be made to work with regexes alone within
  # Mastodon (or glitchsoc). As a first pass, even if it is something of a
  # punt, the user chooses word or substring matching per keyword, and the
  # limitations of this implementation should be called out in the docs.
  #
  # [1]: https://unicode.org/reports/tr29/
  #      (archived: https://web.archive.org/web/20171001005125/https://unicode.org/reports/tr29/)
  class Matcher
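    # A sketch of the limitation (illustrative values, assuming the pattern
    # construction in #initialize below):
    #
    #   /\bcat\b/i =~ 'cat videos'   # => 0; 'cat' is a whole word here
    #   /\bcat\b/i =~ 'concatenate'  # => nil; 'cat' sits inside a word
    #
    # Japanese text such as '猫はかわいい' contains no spaces or similar
    # word-breaking characters for \b to anchor to, so whole-word keywords
    # cannot behave as intended there.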
    attr_reader :regex

    # Building the pattern by pushing fragments onto an array and joining
    # them is more idiomatic Ruby than concatenating onto a single string,
    # and (per #164) under MRI 2.4.x it actually allocates *fewer* objects.
    # That was measured by creating twenty keyword mutes inside a rolled-back
    # transaction and comparing GC.stat around each way of building the
    # pattern; the array-join variant looked like:
    #
    #   words = %w(palmettoes nudged hibernation bullish stockade's tightened
    #              Hades Dixie's formalize superego's commissaries Zappa's
    #              viceroy's apothecaries tablespoonful's barons Chennai
    #              tollgate ticked expands)
    #   a = Account.first
    #   KeywordMute.transaction do
    #     words.each { |w| KeywordMute.create!(keyword: w, account: a) }
    #     GC.start
    #     s1 = GC.stat
    #     re = [].tap do |arr|
    #       KeywordMute.where(account: a).select(:keyword, :id).find_each do |m|
    #         arr << Regexp.escape(m.keyword.strip)
    #       end
    #     end.join('|')
    #     s2 = GC.stat
    #     puts s1.inspect, s2.inspect
    #     raise ActiveRecord::Rollback
    #   end
    #
    # The string-concat variant instead appended each escaped keyword and a
    # '|' separator onto a single String. Comparing the GC stats under
    # rails r:
    #
    #                   total_allocated_objects       malloc_increase_bytes
    #   string concat   3200241 -> 3201428 (+1187)    1176 -> 45216 (+44040)
    #   array join      3200380 -> 3201299 (+919)     1176 -> 36448 (+35272)
    def initialize(account_id)
      re = [].tap do |arr|
        Glitch::KeywordMute.where(account_id: account_id).select(:keyword, :id, :whole_word).find_each do |m|
          boundary = m.whole_word ? '\b' : ''
          arr << "#{boundary}#{Regexp.escape(m.keyword.strip)}#{boundary}"
        end
      end.join('|')
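
      # With keywords 'cat' (whole word) and 'cute' (substring), re is now
      # '\bcat\b|cute' (illustrative values; actual keywords come from the
      # account's mutes).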

      # Leave @regex nil when the account has no keywords so that =~ never
      # matches.
      @regex = /#{re}/i unless re.empty?
    end

    # Mirrors Regexp#=~ when keywords exist; returns false when the account
    # has none (regex is nil).
    def =~(str)
      regex ? regex =~ str : false
    end
  end
end