推荐 最新
笑面猫

在MongoDB中,简单场景是否适宜使用事务?

简单场景下使用mongoDB事务是否合适? 有一个项目使用的是mongoDB作为存储数据库,其中用户充值购买VIP功能(逻辑很简单)打算使用mongoDB中的事务来实现,看官网文档的介绍说使用事务的性能并不好。那这样说在mongoDB中应该避免使用事务吗?(我是mongoDB新手,刚使用这种数据库)

18
1
0
浏览量275
谁能阻止我删代码

mongodb实现模板的计算字段,如何实现,有什么好的思路希望得到指点?

模板配置中,需要添加一个计算字段,数据库是mongodb,不知道有什么好的设计方案 计算字段 = $字段1$ * $字段2$ + 233 如何将这个表达式转换成mongodb的语法,或者使用其他方式处理呢(其实里面还可能存在变量时间,函数等先不做讨论)。 目前的思路和想法是,将模板字段保存后加到配置表,每次查询时将字段使用addFields查询出来

14
1
0
浏览量341
抠香糖

mongodb会自己关闭6.0和最新版7.0.1都会这样,下面是日志,为什么会自己关闭了呢?

{"t":{"$date":"2023-09-20T00:18:43.271+08:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"thread1","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2023-09-20T00:18:43.274+08:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"thread1","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":21},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":21},"outgoing":{"minWireVersion":6,"maxWireVersion":21},"isInternalClient":true}}} {"t":{"$date":"2023-09-20T00:18:46.473+08:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"thread1","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2023-09-20T00:18:46.478+08:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"thread1","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrationDonors"}} {"t":{"$date":"2023-09-20T00:18:46.480+08:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"thread1","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMigrationRecipients"}} {"t":{"$date":"2023-09-20T00:18:46.480+08:00"},"s":"I", "c":"CONTROL", "id":5945603, "ctx":"thread1","msg":"Multi threading initialized"} {"t":{"$date":"2023-09-20T00:18:46.480+08:00"},"s":"I", "c":"TENANT_M", "id":7091600, "ctx":"thread1","msg":"Starting TenantMigrationAccessBlockerRegistry"} {"t":{"$date":"2023-09-20T00:18:46.482+08:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":10596,"port":27017,"dbPath":"./data","architecture":"64-bit","host":"flncetw3obal79ea"}} {"t":{"$date":"2023-09-20T00:18:46.482+08:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2023-09-20T00:18:46.482+08:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"7.0.1","gitVersion":"425a0454d12f2664f9e31002bbe4a386a25345b5","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2023-09-20T00:18:46.483+08:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows Server 2019","version":"10.0 (build 17763)"}}} {"t":{"$date":"2023-09-20T00:18:46.483+08:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"mongo.conf","net":{"bindIp":"127.0.0.1","port":27017},"security":{"authorization":"disabled"},"storage":{"dbPath":"./data"},"systemLog":{"destination":"file","path":"./log/mongo.log","quiet":true}}}} {"t":{"$date":"2023-09-20T00:18:46.488+08:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"./data","storageEngine":"wiredTiger"}} {"t":{"$date":"2023-09-20T00:18:46.490+08:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening 
WiredTiger","attr":{"config":"create,cache_size=15871M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],"}} {"t":{"$date":"2023-09-20T00:18:47.354+08:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":864}} {"t":{"$date":"2023-09-20T00:18:47.354+08:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2023-09-20T00:18:47.376+08:00"},"s":"W", "c":"CONTROL", "id":22192, "ctx":"initandlisten","msg":"You are running on a NUMA machine. We suggest disabling NUMA in the machine BIOS by enabling interleaving to avoid performance problems. See your BIOS documentation for more information","tags":["startupWarnings"]} {"t":{"$date":"2023-09-20T00:18:47.383+08:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":21},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":21},"outgoing":{"minWireVersion":6,"maxWireVersion":21},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":21},"incomingInternalClient":{"minWireVersion":21,"maxWireVersion":21},"outgoing":{"minWireVersion":21,"maxWireVersion":21},"isInternalClient":true}}} {"t":{"$date":"2023-09-20T00:18:47.383+08:00"},"s":"I", "c":"REPL", "id":5853300, "ctx":"initandlisten","msg":"current featureCompatibilityVersion value","attr":{"featureCompatibilityVersion":"7.0","context":"startup"}} {"t":{"$date":"2023-09-20T00:18:47.384+08:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2023-09-20T00:18:47.395+08:00"},"s":"I", "c":"CONTROL", "id":6608200, "ctx":"initandlisten","msg":"Initializing cluster server parameters from disk"} {"t":{"$date":"2023-09-20T00:18:47.395+08:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2023-09-20T00:18:48.276+08:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"./data/diagnostic.data"}} {"t":{"$date":"2023-09-20T00:18:48.284+08:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}} {"t":{"$date":"2023-09-20T00:18:48.284+08:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2023-09-20T00:18:48.290+08:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2023-09-20T00:18:48.290+08:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2023-09-20T00:18:54.481+08:00"},"s":"I", "c":"NETWORK", "id":51800, 
"ctx":"conn2","msg":"client metadata","attr":{"remote":"127.0.0.1:54703","client":"conn2","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:18:54.481+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"127.0.0.1:54704","client":"conn1","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:18:54.807+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn3","msg":"client metadata","attr":{"remote":"127.0.0.1:54705","client":"conn3","doc":{"application":{"name":"Navicat"},"driver":{"name":"mongoc","version":"1.21.1"},"os":{"type":"Windows","name":"Windows","version":"6.2 (9200)","architecture":"x86_64"},"platform":"cfg=0x02041700e9 CC=MSVC 1900 CFLAGS=\"/DWIN32 /D_WINDOWS /W3\" LDFLAGS=\"/machine:x64\""}}} {"t":{"$date":"2023-09-20T00:19:36.579+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn4","msg":"client metadata","attr":{"remote":"127.0.0.1:54765","client":"conn4","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:19:36.579+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn4","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2023-09-20T00:19:36.601+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn5","msg":"client metadata","attr":{"remote":"127.0.0.1:54766","client":"conn5","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:19:38.935+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn5","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":2334}} {"t":{"$date":"2023-09-20T00:28:11.642+08:00"},"s":"W", "c":"NETWORK", "id":4615610, "ctx":"conn1","msg":"Failed to check socket connectivity","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"peekASIOStream :: caused by :: Connection reset by peer"}}} {"t":{"$date":"2023-09-20T00:28:11.642+08:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn1","msg":"Interrupted operation as its client disconnected","attr":{"opId":7066}} {"t":{"$date":"2023-09-20T00:28:11.643+08:00"},"s":"I", "c":"EXECUTOR", "id":22989, "ctx":"conn1","msg":"Error sending response to client. 
Ending connection from remote","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"futurize :: caused by :: Connection reset by peer"},"remote":"127.0.0.1:54704","connectionId":1}} {"t":{"$date":"2023-09-20T00:29:04.374+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn6","msg":"client metadata","attr":{"remote":"127.0.0.1:55574","client":"conn6","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:29:04.375+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7","msg":"client metadata","attr":{"remote":"127.0.0.1:55573","client":"conn7","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:29:04.679+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn8","msg":"client metadata","attr":{"remote":"127.0.0.1:55577","client":"conn8","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:29:04.680+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn8","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2023-09-20T00:29:04.984+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn10","msg":"client metadata","attr":{"remote":"127.0.0.1:55581","client":"conn10","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:29:04.984+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn9","msg":"client metadata","attr":{"remote":"127.0.0.1:55580","client":"conn9","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:29:05.036+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn10","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":51}} {"t":{"$date":"2023-09-20T00:29:05.036+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn9","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":52}} {"t":{"$date":"2023-09-20T00:30:25.485+08:00"},"s":"W", "c":"NETWORK", "id":4615610, "ctx":"conn7","msg":"Failed to check socket connectivity","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"peekASIOStream :: caused by :: Connection reset by peer"}}} {"t":{"$date":"2023-09-20T00:30:25.485+08:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn7","msg":"Interrupted operation as its client disconnected","attr":{"opId":8918}} {"t":{"$date":"2023-09-20T00:30:25.486+08:00"},"s":"I", "c":"EXECUTOR", "id":22989, "ctx":"conn7","msg":"Error sending response to client. 
Ending connection from remote","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"futurize :: caused by :: Connection reset by peer"},"remote":"127.0.0.1:55573","connectionId":7}} {"t":{"$date":"2023-09-20T00:30:42.626+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn11","msg":"client metadata","attr":{"remote":"127.0.0.1:55718","client":"conn11","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:30:42.626+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn12","msg":"client metadata","attr":{"remote":"127.0.0.1:55719","client":"conn12","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:30:42.930+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn13","msg":"client metadata","attr":{"remote":"127.0.0.1:55721","client":"conn13","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:30:42.930+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn13","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2023-09-20T00:30:43.300+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn14","msg":"client metadata","attr":{"remote":"127.0.0.1:55725","client":"conn14","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:30:43.328+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn14","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":27}} {"t":{"$date":"2023-09-20T00:30:43.349+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn15","msg":"client metadata","attr":{"remote":"127.0.0.1:55726","client":"conn15","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:30:43.437+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn15","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":88}} {"t":{"$date":"2023-09-20T00:32:52.636+08:00"},"s":"W", "c":"NETWORK", "id":4615610, "ctx":"conn12","msg":"Failed to check socket connectivity","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"peekASIOStream :: caused by :: Connection reset by peer"}}} {"t":{"$date":"2023-09-20T00:32:52.636+08:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn12","msg":"Interrupted operation as its client disconnected","attr":{"opId":10755}} {"t":{"$date":"2023-09-20T00:32:52.636+08:00"},"s":"I", "c":"EXECUTOR", "id":22989, "ctx":"conn12","msg":"Error sending response to client. 
Ending connection from remote","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"futurize :: caused by :: Connection reset by peer"},"remote":"127.0.0.1:55719","connectionId":12}} {"t":{"$date":"2023-09-20T00:33:17.965+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn16","msg":"client metadata","attr":{"remote":"127.0.0.1:55935","client":"conn16","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:33:17.965+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn17","msg":"client metadata","attr":{"remote":"127.0.0.1:55936","client":"conn17","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:33:18.268+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn18","msg":"client metadata","attr":{"remote":"127.0.0.1:55938","client":"conn18","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:33:18.269+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn18","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2023-09-20T00:33:18.703+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn19","msg":"client metadata","attr":{"remote":"127.0.0.1:55941","client":"conn19","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:33:18.754+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn19","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":51}} {"t":{"$date":"2023-09-20T00:37:13.096+08:00"},"s":"W", "c":"NETWORK", "id":4615610, "ctx":"conn17","msg":"Failed to check socket connectivity","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"peekASIOStream :: caused by :: Connection reset by peer"}}} {"t":{"$date":"2023-09-20T00:37:13.096+08:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn17","msg":"Interrupted operation as its client disconnected","attr":{"opId":14191}} {"t":{"$date":"2023-09-20T00:37:13.096+08:00"},"s":"I", "c":"EXECUTOR", "id":22989, "ctx":"conn17","msg":"Error sending response to client. 
Ending connection from remote","attr":{"error":{"code":6,"codeName":"HostUnreachable","errmsg":"futurize :: caused by :: Connection reset by peer"},"remote":"127.0.0.1:55936","connectionId":17}} {"t":{"$date":"2023-09-20T00:37:18.555+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn20","msg":"client metadata","attr":{"remote":"127.0.0.1:56289","client":"conn20","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:37:18.555+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn21","msg":"client metadata","attr":{"remote":"127.0.0.1:56290","client":"conn21","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:37:18.861+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn22","msg":"client metadata","attr":{"remote":"127.0.0.1:56293","client":"conn22","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:37:18.861+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn22","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2023-09-20T00:37:19.220+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn23","msg":"client metadata","attr":{"remote":"127.0.0.1:56297","client":"conn23","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:37:19.236+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn23","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":16}} {"t":{"$date":"2023-09-20T00:37:19.277+08:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24","msg":"client metadata","attr":{"remote":"127.0.0.1:56298","client":"conn24","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.4"},"os":{"type":"windows","architecture":"amd64"},"platform":"go1.19.2"}}} {"t":{"$date":"2023-09-20T00:37:19.407+08:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn24","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":129}} {"t":{"$date":"2023-09-20T00:53:53.205+08:00"},"s":"F", "c":"CONTROL", "id":6384300, "ctx":"ftdc","msg":"Writing fatal message","attr":{"message":"terminate() called. 
An exception is active; attempting to gather more information\n"}} {"t":{"$date":"2023-09-20T00:53:53.205+08:00"},"s":"F", "c":"CONTROL", "id":6384300, "ctx":"ftdc","msg":"Writing fatal message","attr":{"message":"DBException::toString(): FileRenameFailed: \ufffdܾ\ufffd\ufffd\ufffd\ufffdʡ\ufffd\nActual exception type: class mongo::error_details::ExceptionForImpl\n\n"}} {"t":{"$date":"2023-09-20T00:53:53.205+08:00"},"s":"F", "c":"CONTROL", "id":6384300, "ctx":"ftdc","msg":"Writing fatal message","attr":{"message":"\n"}} {"t":{"$date":"2023-09-20T00:53:53.340+08:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"LogicalSessionCacheRefresh","msg":"Slow query","attr":{"type":"command","ns":"config.$cmd","command":{"update":"system.sessions","ordered":false,"writeConcern":{"w":"majority","wtimeout":15000},"$db":"config"},"numYields":0,"reslen":60,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":1}},"FeatureCompatibilityVersion":{"acquireCount":{"w":1}},"ReplicationStateTransition":{"acquireCount":{"w":1}},"Global":{"acquireCount":{"w":1}},"Database":{"acquireCount":{"w":1}},"Collection":{"acquireCount":{"w":1}}},"flowControl":{"acquireCount":1},"writeConcern":{"w":"majority","wtimeout":15000,"provenance":"clientSupplied"},"waitForWriteConcernDurationMillis":4997,"storage":{},"protocol":"op_msg","durationMillis":4998}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31380, "ctx":"ftdc","msg":"BACKTRACE","attr":{"bt":{"backtrace":[{"a":"7FF6AD11A081","module":"mongod.exe","file":".../src/mongo/util/stacktrace_windows.cpp","line":336,"s":"mongo::printStackTrace","s+":"41"},{"a":"7FF6AD11D848","module":"mongod.exe","file":".../src/mongo/util/signal_handlers_synchronous.cpp","line":261,"s":"mongo::`anonymous namespace'::myTerminate","s+":"E8"},{"a":"7FF6AD1CF9E7","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":87,"s":"mongo::stdx::dispatch_impl","s+":"17"},{"a":"7FF6AD1CF9C9","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":91,"s":"mongo::stdx::TerminateHandlerDetailsInterface::dispatch","s+":"9"},{"a":"7FFBB96CDE58","module":"ucrtbase.dll","s":"terminate","s+":"18"},{"a":"7FFBB1891AAB","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"95B"},{"a":"7FFBB1892317","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"11C7"},{"a":"7FFBB1894119","module":"VCRUNTIME140_1.dll","s":"_CxxFrameHandler4","s+":"A9"},{"a":"7FF6AD40CC5C","module":"mongod.exe","file":"d:/a01/_work/43/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp","line":86,"s":"__GSHandlerCheck_EH4","s+":"64"},{"a":"7FFBBC93582F","module":"ntdll.dll","s":"_chkstk","s+":"11F"},{"a":"7FFBBC894CEF","module":"ntdll.dll","s":"RtlWalkFrameChain","s+":"14BF"},{"a":"7FFBBC898AE6","module":"ntdll.dll","s":"RtlRaiseException","s+":"316"},{"a":"7FFBB8954859","module":"KERNELBASE.dll","s":"RaiseException","s+":"69"},{"a":"7FFBA96166C0","module":"VCRUNTIME140.dll","s":"CxxThrowException","s+":"90"},{"a":"7FF6AD19587D","module":"mongod.exe","file":"C:/data/mci/6c3eb76a3d93e77b7c1801902f120032/src/build/opt/mongo/base/error_codes.cpp","line":2604,"s":"mongo::error_details::throwExceptionForStatus","s+":"42D"},{"a":"7FF6AD12737E","module":"mongod.exe","file":".../src/mongo/util/assert_util.cpp","line":282,"s":"mongo::uassertedWithLocation","s+":"19E"},{"a":"7FF6AB289A00","module":"mongod.exe","file":".../src/mongo/db/ftdc/controller.cpp","line":256,"s":"mongo::FTDCController::doLoop","s+":"5B0"},{"a":"7FF6AB2892EC","module":"mongod.exe","file"
:"C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.31.31103/include/thread","line":56,"s":"std::thread::_Invoke,0>'::`1':: >,0>","s+":"2C"},{"a":"7FFBB968268A","module":"ucrtbase.dll","s":"o_exp","s+":"5A"},{"a":"7FFBBC6E7974","module":"KERNEL32.DLL","s":"BaseThreadInitThunk","s+":"14"}]}},"tags":[]} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD11A081","module":"mongod.exe","file":".../src/mongo/util/stacktrace_windows.cpp","line":336,"s":"mongo::printStackTrace","s+":"41"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD11D848","module":"mongod.exe","file":".../src/mongo/util/signal_handlers_synchronous.cpp","line":261,"s":"mongo::`anonymous namespace'::myTerminate","s+":"E8"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD1CF9E7","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":87,"s":"mongo::stdx::dispatch_impl","s+":"17"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD1CF9C9","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":91,"s":"mongo::stdx::TerminateHandlerDetailsInterface::dispatch","s+":"9"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB96CDE58","module":"ucrtbase.dll","s":"terminate","s+":"18"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB1891AAB","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"95B"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB1892317","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"11C7"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB1894119","module":"VCRUNTIME140_1.dll","s":"_CxxFrameHandler4","s+":"A9"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD40CC5C","module":"mongod.exe","file":"d:/a01/_work/43/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp","line":86,"s":"__GSHandlerCheck_EH4","s+":"64"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC93582F","module":"ntdll.dll","s":"_chkstk","s+":"11F"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC894CEF","module":"ntdll.dll","s":"RtlWalkFrameChain","s+":"14BF"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC898AE6","module":"ntdll.dll","s":"RtlRaiseException","s+":"316"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB8954859","module":"KERNELBASE.dll","s":"RaiseException","s+":"69"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, 
"ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBA96166C0","module":"VCRUNTIME140.dll","s":"CxxThrowException","s+":"90"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD19587D","module":"mongod.exe","file":"C:/data/mci/6c3eb76a3d93e77b7c1801902f120032/src/build/opt/mongo/base/error_codes.cpp","line":2604,"s":"mongo::error_details::throwExceptionForStatus","s+":"42D"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD12737E","module":"mongod.exe","file":".../src/mongo/util/assert_util.cpp","line":282,"s":"mongo::uassertedWithLocation","s+":"19E"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AB289A00","module":"mongod.exe","file":".../src/mongo/db/ftdc/controller.cpp","line":256,"s":"mongo::FTDCController::doLoop","s+":"5B0"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AB2892EC","module":"mongod.exe","file":"C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.31.31103/include/thread","line":56,"s":"std::thread::_Invoke,0>'::`1':: >,0>","s+":"2C"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB968268A","module":"ucrtbase.dll","s":"o_exp","s+":"5A"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC6E7974","module":"KERNEL32.DLL","s":"BaseThreadInitThunk","s+":"14"}}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"F", "c":"CONTROL", "id":23134, "ctx":"ftdc","msg":"Unhandled exception","attr":{"exceptionString":"0xE0000001","addressString":"0x00007FFBB8954859"}} {"t":{"$date":"2023-09-20T00:53:54.182+08:00"},"s":"F", "c":"CONTROL", "id":23136, "ctx":"ftdc","msg":"*** stack trace for unhandled exception:"} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31380, "ctx":"ftdc","msg":"BACKTRACE","attr":{"bt":{"backtrace":[{"a":"7FFBB8954859","module":"KERNELBASE.dll","s":"RaiseException","s+":"69"},{"a":"7FF6AD11CEC9","module":"mongod.exe","file":".../src/mongo/util/signal_handlers_synchronous.cpp","line":103,"s":"mongo::`anonymous namespace'::endProcessWithSignal","s+":"19"},{"a":"7FF6AD11D857","module":"mongod.exe","file":".../src/mongo/util/signal_handlers_synchronous.cpp","line":262,"s":"mongo::`anonymous 
namespace'::myTerminate","s+":"F7"},{"a":"7FF6AD1CF9E7","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":87,"s":"mongo::stdx::dispatch_impl","s+":"17"},{"a":"7FF6AD1CF9C9","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":91,"s":"mongo::stdx::TerminateHandlerDetailsInterface::dispatch","s+":"9"},{"a":"7FFBB96CDE58","module":"ucrtbase.dll","s":"terminate","s+":"18"},{"a":"7FFBB1891AAB","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"95B"},{"a":"7FFBB1892317","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"11C7"},{"a":"7FFBB1894119","module":"VCRUNTIME140_1.dll","s":"_CxxFrameHandler4","s+":"A9"},{"a":"7FF6AD40CC5C","module":"mongod.exe","file":"d:/a01/_work/43/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp","line":86,"s":"__GSHandlerCheck_EH4","s+":"64"},{"a":"7FFBBC93582F","module":"ntdll.dll","s":"_chkstk","s+":"11F"},{"a":"7FFBBC894CEF","module":"ntdll.dll","s":"RtlWalkFrameChain","s+":"14BF"},{"a":"7FFBBC898AE6","module":"ntdll.dll","s":"RtlRaiseException","s+":"316"},{"a":"7FFBB8954859","module":"KERNELBASE.dll","s":"RaiseException","s+":"69"},{"a":"7FFBA96166C0","module":"VCRUNTIME140.dll","s":"CxxThrowException","s+":"90"},{"a":"7FF6AD19587D","module":"mongod.exe","file":"C:/data/mci/6c3eb76a3d93e77b7c1801902f120032/src/build/opt/mongo/base/error_codes.cpp","line":2604,"s":"mongo::error_details::throwExceptionForStatus","s+":"42D"},{"a":"7FF6AD12737E","module":"mongod.exe","file":".../src/mongo/util/assert_util.cpp","line":282,"s":"mongo::uassertedWithLocation","s+":"19E"},{"a":"7FF6AB289A00","module":"mongod.exe","file":".../src/mongo/db/ftdc/controller.cpp","line":256,"s":"mongo::FTDCController::doLoop","s+":"5B0"},{"a":"7FF6AB2892EC","module":"mongod.exe","file":"C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.31.31103/include/thread","line":56,"s":"std::thread::_Invoke,0>'::`1':: >,0>","s+":"2C"},{"a":"7FFBB968268A","module":"ucrtbase.dll","s":"o_exp","s+":"5A"},{"a":"7FFBBC6E7974","module":"KERNEL32.DLL","s":"BaseThreadInitThunk","s+":"14"}]}},"tags":[]} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB8954859","module":"KERNELBASE.dll","s":"RaiseException","s+":"69"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD11CEC9","module":"mongod.exe","file":".../src/mongo/util/signal_handlers_synchronous.cpp","line":103,"s":"mongo::`anonymous namespace'::endProcessWithSignal","s+":"19"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD11D857","module":"mongod.exe","file":".../src/mongo/util/signal_handlers_synchronous.cpp","line":262,"s":"mongo::`anonymous namespace'::myTerminate","s+":"F7"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD1CF9E7","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":87,"s":"mongo::stdx::dispatch_impl","s+":"17"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD1CF9C9","module":"mongod.exe","file":".../src/mongo/stdx/set_terminate_internals.cpp","line":91,"s":"mongo::stdx::TerminateHandlerDetailsInterface::dispatch","s+":"9"}}} 
{"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB96CDE58","module":"ucrtbase.dll","s":"terminate","s+":"18"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB1891AAB","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"95B"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB1892317","module":"VCRUNTIME140_1.dll","s":"_NLG_Return2","s+":"11C7"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB1894119","module":"VCRUNTIME140_1.dll","s":"_CxxFrameHandler4","s+":"A9"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD40CC5C","module":"mongod.exe","file":"d:/a01/_work/43/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp","line":86,"s":"__GSHandlerCheck_EH4","s+":"64"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC93582F","module":"ntdll.dll","s":"_chkstk","s+":"11F"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC894CEF","module":"ntdll.dll","s":"RtlWalkFrameChain","s+":"14BF"}}} {"t":{"$date":"2023-09-20T00:53:54.185+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC898AE6","module":"ntdll.dll","s":"RtlRaiseException","s+":"316"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB8954859","module":"KERNELBASE.dll","s":"RaiseException","s+":"69"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBA96166C0","module":"VCRUNTIME140.dll","s":"CxxThrowException","s+":"90"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD19587D","module":"mongod.exe","file":"C:/data/mci/6c3eb76a3d93e77b7c1801902f120032/src/build/opt/mongo/base/error_codes.cpp","line":2604,"s":"mongo::error_details::throwExceptionForStatus","s+":"42D"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AD12737E","module":"mongod.exe","file":".../src/mongo/util/assert_util.cpp","line":282,"s":"mongo::uassertedWithLocation","s+":"19E"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AB289A00","module":"mongod.exe","file":".../src/mongo/db/ftdc/controller.cpp","line":256,"s":"mongo::FTDCController::doLoop","s+":"5B0"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FF6AB2892EC","module":"mongod.exe","file":"C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.31.31103/include/thread","line":56,"s":"std::thread::_Invoke,0>'::`1':: >,0>","s+":"2C"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, 
"ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBB968268A","module":"ucrtbase.dll","s":"o_exp","s+":"5A"}}} {"t":{"$date":"2023-09-20T00:53:54.186+08:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":"Frame","attr":{"frame":{"a":"7FFBBC6E7974","module":"KERNEL32.DLL","s":"BaseThreadInitThunk","s+":"14"}}} {"t":{"$date":"2023-09-20T00:53:54.187+08:00"},"s":"I", "c":"CONTROL", "id":23132, "ctx":"ftdc","msg":"Writing minidump diagnostic file","attr":{"dumpName":"C:\\Users\\Administrator\\Desktop\\DDMSystem\\mongodb7.0.1\\bin\\mongod.2023-09-19T16-53-54.mdmp"}} {"t":{"$date":"2023-09-20T00:53:54.408+08:00"},"s":"F", "c":"CONTROL", "id":23137, "ctx":"ftdc","msg":"*** immediate exit due to unhandled exception"}

13
1
0
浏览量359
感觉对了

mongodb 本地访问与远程访问的数据一致性问题?

各位大佬好,我的问题是这样: 我有一个服务器,我在服务器里自己安装了一个mongodb,假设外网访问是131.131.131.131:30303. 然后我在家里通过访问这个ip就能拿到。 而我在这个服务器上部署了一个服务,连接这个mongo使用的是127.0.0.1:30303。 但发生了异常:我在家里通过ip远程(程序访问)访问,拿回来700多的数据。 但是 在服务上的程序通过127访问mongo只能拿到500多的数据。 两套配置文件除了访问的ip不同以外,其他db名,用户名密码完全一致。 更神奇的是,我在服务器上把配置文件中mongo的访问ip也改成外网ip,依然只能拿到500多的数据。 我实在高不清这个现象和原因是什么,求各位大佬指点迷津!我用的是golang。 mongo-go-driver

14
1
0
浏览量312
无事小神仙

既然有MySQL了,为什么还要有MongoDB?

最近项目在使用MongoDB作为图片和文档的存储数据库,为啥不直接存MySQL里,还要搭个MongoDB集群,麻不麻烦?让我们一起,一探究竟,了解一下MongoDB的特点和基本用法,实现快速入门,丰富个人简历,提高面试level,给自己增加一点谈资,秒变面试小达人,BAT不是梦。三分钟你将学会:MongoDB主要特征MongoDB优缺点,扬长避短何时选择MongoDB?为啥要用它?MongoDB与MySQL关键字对比下载与安装过程中一些常见的坑Java整合MongoDB,实现农民工增删改查一、基本概念走起MongoDB是一款开源、跨平台、分布式,具有大数据处理能力的文档存储数据库。文档数据库MongoDB用于记录文档结构的数据,比如JSON、XML结构的数据。二、MongoDB的主要特征高性能。提供JSON、XML等可嵌入数据快速处理功能,提供文档的索引功能,以提高查询速度;丰富的查询语言。为数据聚合、结构文档、地理空间提供丰富的查询功能;高可用性。提供自动故障转移和数据冗余处理功能;水平扩展能力。提供基于多服务器集群的分布式数据处理能力,具体处理时分主从和权衡(基于Hash自动推选)两种处理模式;支持多种存储引擎。MongoDB提供多种存储引擎,WiredTiger引擎、MMAPv1引擎是基于硬盘读写的存储引擎,In-Memory引擎是基于内存的存储引擎;三、MongoDB优缺点,扬长避短1、优点Free-schema无模式文档,适应非结构化数据存储;内置GridFS,支持大容量的存储;内置Sharding,分片简单弱一致性(最终一致),更能保证用户的访问速度;查询性能优越,对于千万级别的文档对象,差不多10个G,对有索引的ID的查询不会比MySQL慢,而对非索引字段的查询,则是完胜MySQL;聚合框架,它支持典型几种聚合操作 , 比如,Aggregate pipelien, Map-Reduce等;支持自动故障恢复2、缺点太吃内存,快是有原因的,因为MongoDB把数据都放内存里了;对事务的支持不够友好;占用空间过大;对联表查询的支持不够强大;只有最终一致性,言外之意,就是可能造成数据的不一致,如果想要保持强一致性,必须在一个服务器处理所有的读写操作,坑;复杂聚合操作通过mapreduce创建,速度慢;Mongodb全局锁机制也是个坑;预分配模式会带来的磁盘瓶颈;删除记录时不会释放空间,相当于逻辑删除,这个真的坑;MongoDB到现在为止,好像还没有太好用的客户端工具;四、何时选择MongoDB?为啥要用它?1、MongoDB事务MongoDB目前只支持单文档事务,MongoDB暂时不适合需要复杂事务的场景。 灵活的文档模型JSON格式存储最接近真实对象模型,对开发者友好,方便快速开发迭代,可用复制集满足数据高可靠、高可用的需求,运维较为简单、故障自动切换可扩展分片集群海量数据存储。2、多引擎支持各种强大的索引需求支持地理位置索引可用于构建各种O2O应用文本索引解决搜索的需求TTL索引解决历史数据过期的需求Gridfs解决文件存储的需求aggregation & mapreduce解决数据分析场景需求,可以自己写查询语句或脚本,将请求分发到 MongoDB 上完成。3、具体的应用场景传统的关系型数据库在解决三高问题上的力不从心。 何为三高?High performance - 对数据库高并发读写的需求。Huge Storage - 对海量数据的高效率存储和访问的需求。High Scalability && High Availability- 对数据库的高可扩展性和高可用性的需求。MongoDB可以完美解决三高问题。4、以下是几个实际的应用案例:(1)游戏场景使用MongoDB存储游戏用户信息、装备、积分等,直接以内嵌文档的形式存储,方便查询、更新。(2)物流场景使用MongoDB存储订单信息、订单状态、物流信息,订单状态在运送过程中飞速迭代、以MongoDB内嵌数组的形式来存储,一次查询就能将订单所有的变更查出来,牛逼plus。(3)社交场景使用MongoDB存储用户信息,朋友圈信息,通过地理位置索引实现附近的人、定位功能。(4)物联网场景使用MongoDB存储设备信息、设备汇报的日志信息、并对这些信息进行多维度分析。(5)视频直播使用MongoDB存储用户信息、点赞互动信息。5、选择MongoDB的场景总结:数据量大读写操作频繁数据价值较低,对事务要求不高五、MongoDB与MySQL关键字对比1、关键字对比MySQLMongoDB解释说明databasedatabase数据库tablecollection表/集合rowdocument行/文档columnfield字段/域indexindex索引join嵌入文档表关联/MongoDB不支持join,MongoDB通过嵌入式文档来替代多表连接primary keyprimary key主键/MongoDB自动将_id字段设置为主键2、集合相当于MySQL中的表集合就是一组文档。可以看作是具有动态模式的表。集合具有动态模式的特性。这意味着一个集合中的文档可以具有任意数量的不同形态。但是,将不同类型的文档存放在一个集合中会出现很多问题:文档中可以存放任意类型的变量,但是,这里不建议将不同类型的文档保存在同一个集合中,开发人员需要确保每个查询只返回特定模式的文档,或者确保执行查询的应用程序代码可以处理不同类型的文档;获取集合列表比提取集合中的文档类型列表要快得多,减少磁盘查找次数;相同类型的文档存放在同一个集合中可以实现数据的局部性,对于集合,让使用者见文知意;集合中只存放单一类型的文档,可以更高效地对集合进行索引;3、集合的命名集合名称中不能是空字符串;集合名称不能包含\0(空字符),因为这个字符用于表示一个集合名称的结束;集合名称不能以system.开头,该前缀是为内部集合保留的。集合名称不能有$,只能在某些特定情况下使用。通常情况下,可以认为这两个字符是MongoDB的保留字符,如果使用不当,那么驱动程序将无法正常工作。4、文档相当于MySQL中的行文档是MongoDB中的基本数据单元,相当于传统关系型数据库中的行,它是一组有序键值的集合。每个文档都有一个特殊的键“_id”,其在所属的集合中是唯一的。文档中的键是字符串类型。键中不能含有\0(空字符)。这个字符用于表示一个键的结束。 .和$是特殊字符,只能在某些特定情况下使用。通常情况下,可以认为这两个字符是MongoDB的保留字符,如果使用不当,那么驱动程序将无法正常工作。5、游标数据库会使用游标返回find的执行结果。游标的客户端实现通常能够在很大程度上对查询的最终输出进行控制。你可以限制结果的数量,跳过一些结果,按任意方向的任意键组合对结果进行排序,以及执行许多其他功能强大的操作。通过cursor.hasNext()检查是否还有其它结果,通过cursor.next()用来对其进行获取。调用find()时,shell并不会立即查询数据库,而是等到真正开始请求结果时才发送查询,这样可以在执行之前给查询附加额外的选项。cursor对象的大多数方法会返回游标本身,这样就可以按照任意顺序将选项链接起来了。在使用db.users.find();查询时,实际上查询并没有真正执行,只是在构造查询,执行cursor.hasNext(),查询才会发往服务器端。shell会立刻获取前100个结果或者前4MB的数据(两者之中的较小者),这样下次调用next或者hasNext时就不必再次连接服务器去获取结果了。在客户端遍历完第一组结果后,shell会再次连接数据库,使用getMore请求更多的结果。getMore请求包含一个游标的标识符,它会向数据库询问是否还有更多的结果,如果有则返回下一批结果。这个过程会一直持续,直到游标耗尽或者结果被全部返回。6、游标的生命周期在服务器端,游标会占用内存和资源。一旦游标遍历完结果之后,或者客户端发送一条消息要求终止,数据库就可以释放它正在使用的资源。何时销毁游标:当游标遍历完匹配的结果时,它会消除自身;当游标超出客户端的作用域时,驱动程序会向数据库发送一条特殊的消息,让数据库终止该游标;如果10分钟没有被使用的话,数据库游标也将自动销毁;六、下载与安装过程中一些常见的坑1、下载地址:https://www.mongodb.com/try/download/community22、配置环境变量D:\Program 
Files\MongoDB\Server\5.0\bin3、在bin目录下,重新打开一个窗口,D:\Program Files\MongoDB\Server\5.0\bin,打开cmd,输入MongoDB4、如果msi方式失败,可以下载zip文件进行安装。下载zip文件,解压,在bin同级目录下建data文件夹,在data下建一个db文件夹,存储MongoDB数据。在bin文件夹下执行cmd,执行mongod --dbpath D:\Program Files\mongodb\data\db命令;再在data目录下,建一个logs文件夹,存放MongoDB日志。在mongodb/bin目录下,建一个mongod.cfg文件,写入systemLog: destination: file logAppend: true path: D:\Program Files\mongodb\data\logs\mongod.log storage: dbPath: D:\Program Files\mongodb\data\db执行mongod --config "D:\Program Files\mongodb\bin\mongod.cfg" --install 命令,安装MongoDB。通过mongod --version检查MongoDB版本。D:\Program Files\mongodb\bin>mongod --version db version v5.0.14 Build Info: { "version": "5.0.14", "gitVersion": "1b3b0073a0b436a8a502b612f24fb2bd572772e5", "modules": [], "allocator": "tcmalloc", "environment": { "distmod": "windows", "distarch": "x86_64", "target_arch": "x86_64" } }5、mongodb由于目标计算机积极拒绝,无法连接突然间,mongodb无法连接了?mongod.exe --dbpath "D:\Program Files\mongodb\data完美解决。注意一点,在重新启动时,执行mongod.exe --dbpath "D:\Program Files\mongodb\data的窗口不要关闭。6、由于找不到vcruntime140_1.dll,无法继续执行代码1、下载vcruntime140_1.dll文件2、将vcruntime140_1.dll文件拷贝到C:\Windows\System32即可七、Java整合MongoDB,实现农民工增删改查1、加入POM<dependency> <groupId>org.mongodb</groupId> <artifactId>mongo-java-driver</artifactId> <version>3.8.2</version> </dependency>2、MongoDBUtil工具类package com.example.demo.utils; import java.util.ArrayList; import java.util.List; import java.util.Map; import org.bson.Document; import org.bson.conversions.Bson; import com.mongodb.MongoClient; import com.mongodb.MongoCredential; import com.mongodb.ServerAddress; import com.mongodb.client.FindIterable; import com.mongodb.client.MongoCollection; import com.mongodb.client.MongoCursor; import com.mongodb.client.MongoDatabase; import com.mongodb.client.model.Filters; public class MongoDBUtil { private static MongoClient mongoClient; private static MongoClient mongoClientIdentify; /** * 不通过认证获取连接数据库对象 */ public static MongoDatabase getNoIdentifyConnect(String host, int port, String dbaseName) { // 连接mongodb服务 MongoDBUtil.mongoClient = new MongoClient(host, port); // 连接数据库 MongoDatabase mongoDatabase = MongoDBUtil.mongoClient.getDatabase(dbaseName); // 返回连接数据库对象 return mongoDatabase; } /** * 通过连接认证获取MongoDB连接 */ public static MongoDatabase getIdentifyConnect(String host, int port, String dbaseName, String userName, String password) { List<ServerAddress> adds = new ArrayList<ServerAddress>(); ServerAddress serverAddress = new ServerAddress(host, port); adds.add(serverAddress); List<MongoCredential> credentials = new ArrayList<>(); MongoCredential mongoCredential = MongoCredential.createScramSha1Credential(userName, dbaseName, password.toCharArray()); credentials.add(mongoCredential); // 通过连接认证获取MongoDB连接 MongoDBUtil.mongoClientIdentify = new MongoClient(adds, credentials); MongoDatabase mongoDatabase = MongoDBUtil.mongoClientIdentify.getDatabase(dbaseName); return mongoDatabase; } /** * 关闭连接 */ public static void closeNoIdentifyConnect () { MongoDBUtil.mongoClient.close(); } /** * 关闭连接 */ public static void closeIdentifyConnect () { MongoDBUtil.mongoClientIdentify.close(); } /** * 插入一个文档 */ public static void insertOne (Map<String, Object> data, MongoDatabase mongoDatabase, String col) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //创建文档 Document document = new Document(); for (Map.Entry<String, Object> m : data.entrySet()) { document.append(m.getKey(), m.getValue()).append(m.getKey(), m.getValue()); } //插入一个文档 collection.insertOne(document); } /** * 插入多个文档 */ public static 
void insertMany (List<Map<String, Object>> listData, MongoDatabase mongoDatabase, String col) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //要插入的数据 List<Document> list = new ArrayList<>(); for (Map<String, Object> data : listData) { //创建文档 Document document = new Document(); for (Map.Entry<String, Object> m : data.entrySet()) { document.append(m.getKey(), m.getValue()); } list.add(document); } //插入多个文档 collection.insertMany(list); } /** * 删除匹配到的第一个文档 */ public static void delectOne (String col, String key, Object value, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //申明删除条件 Bson filter = Filters.eq(key, value); //删除与筛选器匹配的单个文档 collection.deleteOne(filter); } /** * 删除匹配的所有文档 */ public static void deleteMany (String col, String key, Object value, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //申明删除条件 Bson filter = Filters.eq(key, value); //删除与筛选器匹配的所有文档 collection.deleteMany(filter); } /** * 删除集合中所有文档 */ public static void deleteAllDocument(String col, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); collection.deleteMany(new Document()); } /** * 删除文档和集合。 */ public static void deleteAllCollection(String col, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); collection.drop(); } /** * 修改单个文档,修改过滤器筛选出的第一个文档 * * @param col 修改的集合 * @param key 修改条件的键 * @param value 修改条件的值 * @param eqKey 要修改的键,如果eqKey不存在,则新增记录 * @param eqValue 要修改的值 * @param mongoDatabase 连接数据库对象 */ public static void updateOne (String col, String key, Object value,String eqKey, Object eqValue, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //修改过滤器 Bson filter = Filters.eq(key, value); //指定修改的更新文档 Document document = new Document("$set", new Document(eqKey, eqValue)); //修改单个文档 collection.updateOne(filter, document); } /** * 修改多个文档 * * @param col 修改的集合 * @param key 修改条件的键 * @param value 修改条件的值 * @param eqKey 要修改的键,如果eqKey不存在,则新增记录 * @param eqValue 要修改的值 * @param mongoDatabase 连接数据库对象 */ public static void updateMany (String col, String key, Object value, String eqKey, Object eqValue, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //修改过滤器 Bson filter = Filters.eq(key, value); //指定修改的更新文档 Document document = new Document("$set", new Document(eqKey, eqValue)); //修改多个文档 collection.updateMany(filter, document); } /** * 查找集合中的所有文档 */ public static MongoCursor<Document> find (String col, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //查找集合中的所有文档 FindIterable<Document> findIterable = collection.find(); MongoCursor<Document> cursorIterator = findIterable.iterator(); return cursorIterator; } /** * 按条件查找集合中文档 */ public static MongoCursor<Document> Filterfind (String col,String key, Object value, MongoDatabase mongoDatabase) { //获取集合 MongoCollection<Document> collection = mongoDatabase.getCollection(col); //指定查询过滤器 Bson filter = Filters.eq(key, value); //指定查询过滤器查询 FindIterable<Document> findIterable = collection.find(filter); MongoCursor<Document> cursorIterator = findIterable.iterator(); return cursorIterator; } }3、测试类<dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> </dependency>package com.example.demo.utils; import 
com.mongodb.client.MongoCursor; import com.mongodb.client.MongoDatabase; import org.bson.Document; import org.junit.Test; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; public class MongoDBTest { // 获取数据库连接对象 MongoDatabase mongoDatabase = MongoDBUtil.getNoIdentifyConnect("127.0.0.1", 27017, "test"); @Test public void insertOne() { Map<String, Object> map = new HashMap<String, Object>(); map.put("姓名", "哪吒编程"); map.put("性别", "男"); map.put("年龄", 18); MongoDBUtil.insertOne(map, mongoDatabase, "worker"); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void insertMany() { Map<String, Object> map1 = new HashMap<String, Object>(); map1.put("姓名", "哪吒编程2"); map1.put("性别", "男"); map1.put("年龄", 18); Map<String, Object> map2 = new HashMap<String, Object>(); map2.put("姓名", "妲己"); map2.put("性别", "女"); map2.put("年龄", 18); List<Map<String, Object>> listData = new ArrayList<>(); listData.add(map1); listData.add(map2); MongoDBUtil.insertMany(listData, mongoDatabase, "worker"); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void delectOne() { MongoDBUtil.delectOne("worker", "姓名", "妲己", mongoDatabase); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void deleteMany() { MongoDBUtil.deleteMany("worker", "姓名", "哪吒编程", mongoDatabase); MongoDBUtil.deleteMany("worker", "姓名", "妲己", mongoDatabase); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void deleteAllDocument() { MongoDBUtil.deleteAllDocument("worker", mongoDatabase); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void deleteAllCollection() { MongoDBUtil.deleteAllCollection("worker", mongoDatabase); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void updateOne() { MongoDBUtil.updateOne("worker", "姓名", "哪吒编程2","姓名", "哪吒编程", mongoDatabase); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void updateMany() { MongoDBUtil.updateMany("worker", "姓名", "哪吒编程2","姓名", "哪吒编程", mongoDatabase); MongoDBUtil.closeNoIdentifyConnect(); } @Test public void find() { MongoCursor<Document> mongoCursor = MongoDBUtil.find("worker", mongoDatabase); while (mongoCursor.hasNext()) { Document document = mongoCursor.next(); System.out.println(document + " size: " + document.size()); } MongoDBUtil.closeNoIdentifyConnect(); } @Test public void filterfind() { MongoCursor<Document> mongoCursor = MongoDBUtil.Filterfind("worker", "姓名", "哪吒编程", mongoDatabase); while (mongoCursor.hasNext()) { Document document = mongoCursor.next(); System.out.println(document + " size: " + document.size()); } MongoDBUtil.closeNoIdentifyConnect(); } }

0
0
0
浏览量1026
无事小神仙

MongoDB数据库性能监控看这一篇就够了

最近项目在使用MongoDB作为图片和文档的存储数据库,为啥不直接存MySQL里,还要搭个MongoDB集群,麻不麻烦?让我们一起,一探究竟,继续学习MongoDB数据库性能监控,实现快速入门,丰富个人简历,提高面试level,给自己增加一点谈资,秒变面试小达人,BAT不是梦。一、MongoDB启动超慢1、启动日常卡住,根本不用为了截屏而快速操作,MongoDB启动真的超级慢2、启动MongoDB配置服务器,间歇性失败。3、查看MongoDB日志,分析“MongoDB启动慢”的原因。4、耗时“一小时”,MongoDB启动成功!二、原因分析在MongoDB关闭之前,有较大的索引建立的操作没有完成,MongoDB就直接shutdown了,等MongoDB再次启动的时候,MongoDB默认会将这个index重建好,重建期间处于startup状态。由于不清楚重建索引需要多久,因此可以通过重启mongod时加上–noIndexBuildRetry参数来跳过索引重建。等启动完成后,再创建这个索引。下面从几方面,监控一下MongoDB的性能问题。三、监控MongoDB内存使用情况常驻内存: 常驻内存是MongoDB在RAM中显式拥有的内存。如果查询一个集合数据,MongoDB会将其放入常驻内存中,MongoDB会获得其地址,这个地址不是RAM中数据的真实地址,而是一个虚拟地址。MongoDB可以将它传递给内核,内核会查找出数据的真实位置。如果内核需要从内存中清理缓存,MongoDB仍然可以通过该地址对其进行访问。MongoDB会向内核请求内存,然后内核会查看数据缓存,如果发现数据不存在,就会产生缺页错误并将数据复制到内存中,最后再返给MongoDB。虚拟内存: 操作系统提供的一种抽象,它对软件进程隐藏了物理存储的细节。每个进程都可以看到一个连续的内存地址空间。在Ops Manager中,MongoDB的虚拟内存是映射内存的两倍。映射内存: 包含MongoDB曾经访问过的所有数据。四、监控MongoDB磁盘空间当磁盘空间不足时,可以进行如下操作:可以添加一个分片;删除未使用的索引;可以执行压缩操作;关闭副本集成员,将其数据复制到更大的磁盘中挂载;用较大驱动器的成员替换副本集中的成员;五、MongoDB常用命令1、MongoDB获取系统信息db.hostInfo()2、MongoDB获取系统内存情况db.serverStatus().mem3、MongoDB获取连接数信息db.serverStatus().connections4、MongoDB获取全局锁信息db.serverStatus().globalLock5、MongoDB获取操作统计计数器db.serverStatus().opcounters6、MongoDB获取数据库状态信息db.stats()以上是MongoDB的重要指标,通过这些指标我们可以了解到MongoDB的运行状态,评估数据库的健康程度,并快速确定实际项目中遇到的性能瓶颈。比如项目中遇到的MongoSocketReadTimeoutException:com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message at com.mongodb.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:475) at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:226) at com.mongodb.connection.UsageTrackingInternalConnection.receiveMessage(UsageTrackingInternalConnection.java:105) at com.mongodb.connection.DefaultConnectionPool$PooledConnection.receiveMessage(DefaultConnectionPool.java:438) at com.mongodb.connection.CommandProtocol.execute(CommandProtocol.java:112) at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:168) at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:289) at com.mongodb.connection.DefaultServerConnection.command(DefaultServerConnection.java:176) at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:216) at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:207) at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:113) at com.mongodb.operation.FindOperation$1.call(FindOperation.java:488) at com.mongodb.operation.FindOperation$1.call(FindOperation.java:1) at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:241) at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:214) at com.mongodb.operation.FindOperation.execute(FindOperation.java:483) at com.mongodb.operation.FindOperation.execute(FindOperation.java:1) at com.mongodb.Mongo.execute(Mongo.java:818)六、MongoDB持久性1、复制延迟复制延迟是指从节点无法跟上主节点的速度。从节点一个操作的时间减去主节点此操作的时间,就是复制延迟。延迟应该尽可能的接近0,并且通常是毫秒级的。2、备份备份操作通常会将所有数据读入内存,因此,备份操作通常应该在副本集从节点而不是主节点进行,如果是单机MongoDB,则应该在空间时间进行备份,比如深夜凌晨。3、持久性持久性是数据库必备的一种特性,想象一下,如果数据库不具备持久性,如果数据库重启,数据全部丢失,太可怕了,不敢想。为了在服务器发生故障时提供持久性,MongoDB使用预写式日志机制,英文简称 
WAL。WAL是数据库系统中一种常见的持久性技术。在数据存入数据库之前,将这些更改操作写到磁盘上。从MongoDB4.0开始,执行写操作时,MongoDB会使用与oplog相同的格式创建日志。oplog语句具有幂等性,不管执行多少次,结果都是一样的。MongoDB还维护了日志和数据库数据文件的内存视图。默认情况,每50毫秒会将日志条目刷新到磁盘上,每60秒会将数据库文件刷新到磁盘上。刷新数据的时间60秒间隔被称为检查点。日志用于将上一个检查点之后的数据提供持久性。MongoDB的持久性就是在发生故障时,重启之后,将日志中的语句重新执行一遍,以保证在关闭前丢失的数据重新刷新到MongoDB中。MongoDB会在data目录下创建一个journal的子目录,WiredTiger日志文件的名称为WiredTigerLog.<sequence>。sequence是一个从0 000 000 001开始的数字。MongoDB会对写入的日志进行压缩,日志文件限制的最大大小为100MB。如果大于100MB,MongoDB就会自动创建一个新的日志文件,由于日志文件只需在上次检查点之后恢复数据,因此在新的检查点写入完成时,旧的日志文件就会被删除。

0
0
0
浏览量896
无事小神仙

技术瓶颈?如何解决MongoDB超大块数据问题?

最近项目在使用MongoDB作为图片和文档的存储数据库,为啥不直接存MySQL里,还要搭个MongoDB集群,麻不麻烦?让我们一起,一探究竟,继续学习解决MongoDB超大块数据问题,实现快速入门,丰富个人简历,提高面试level,给自己增加一点谈资,秒变面试小达人,BAT不是梦。一、MongoDB服务器管理1、添加服务器可以在任何时间添加mongos进程,只要确保,它们的 --configdb选项指定了正确的配置服务器副本集,并且客户端可以立即与其建立连接。2、修改分片中的服务器要修改一个分片的成员,需要直接连接到该分片的主节点,并重新配置副本集。集群配置会检测到变更并自动更新 config.shards。3、删除分片一般情况下,不应该从集群中删除分片,会给系统带来不必要的压力。删除分片时,要确保均衡器的打开状态。均衡器的作用是把要删除分片上的所有数据移动到其它分片,这个过程称为排空。可以通过 removeShard命令执行排空操作。二、均衡器可以通过 sh.setBalancerState(false)关闭均衡器。关闭均衡器不会将正在进行的过程停止,也就是说迁移过程不会立即停止。通过db.locks.find({"_id","balancer"})["state"]查看均衡器是否关闭。0表示均衡器已关闭。均衡过程会增加系统的负载,目标分片必须查询源分片的所有文档,并将文档插入目标分片的块中,然后源分片必须删除这些文档。数据迁移是很消耗性能的,此时可以在config.settings集合中为均衡过程指定一个时间窗口。将其指定在一个闲暇时间执行。如果设置了均衡窗口,应该对其进行监控,确保mongos能够在所分配的时间内保持集群的均衡。均衡器使用块的数量而不是数据的大小作为度量。移动一个块被称为迁移,这是MongoDB平衡数据的方式。可能会存在一个大块的分片称为许多小分片迁移的目标。三、修改块的大小一个块可以存放数百万个文档,块越大,迁移到另一个分片所花费的时间就越长,默认情况下,块的大小为64MB。但对于64MB的块,迁移时间太长了,为了加快迁移速度,可以减少块的大小。比如将块的大小改为32MB。db.settings.save({"_id","chunksize","value":32})已经存在的块不会发生改变,自动拆分仅会在插入或更新时发生,拆分操作是无法恢复的,如果增加了块的大小,那么已经存在的块只会通过插入或更新来增长,直到它们达到新的大小。块大小的取值范围在1MB到1024MB。这是一个集群范围的设置,会影响所有的集合和数据库。因此,如果一个集合需要较小的块,另一个集合需要较大的块,那么可能需要在这两个大小间取一个折中的值。如果MongoDB的迁移过于频繁或者使用的文档太大,则可能需要增加块的大小。四、超大块一个块的所有数据都位于某个特定的分片上。如果最终这个分片拥有的块比其它分片多,那么MongoDB会将一些块移动到其它分片上。当一个块大于 config.settings中所设置的最大块大小时,均衡器就不允许移动这个块了。这些不可拆分、不可移动的块被称为超大块。1、分发超大块要解决超大块引起的集群不均衡问题,就必须将超大块均匀地分配到各个分片中。2、分发超大块步骤:关闭均衡器 sh.setBalancerState(false);因为MongoDB不允许移动超过最大块大小的块,所以要暂时先增大块大小,使其超过现有的最大块块大小。记录下当时的块大小。db.settings.save({"_id","chunksize","value":maxInteger});使用moveChunk命令移动分片中的超大块;在源分片剩余的块上运行splitChunk命令,直到其块数量与目标分片块数量大致相同;将块大小设置为其最初值;开启均衡器3、避免出现超大块更改片键,使其拥有更细粒度的分片。通过db.currentOp()查看当前操作,``db.currentOp()```最常见的用途是查找慢操作。MongoDB Enterprise > db.currentOp() { "inprog" : [ { "type" : "op", "host" : "LAPTOP-P6QEH9UD:27017", "desc" : "conn1", "connectionId" : 1, "client" : "127.0.0.1:50481", "appName" : "MongoDB Shell", "clientMetadata" : { "application" : { "name" : "MongoDB Shell" }, "driver" : { "name" : "MongoDB Internal Client", "version" : "5.0.14" }, "os" : { "type" : "Windows", "name" : "Microsoft Windows 10", "architecture" : "x86_64", "version" : "10.0 (build 19044)" } }, "active" : true, "currentOpTime" : "2023-02-07T23:12:23.086+08:00", "threaded" : true, "opid" : 422, "lsid" : { "id" : UUID("f83e33d1-9966-44a4-87de-817de0d804a3"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "secs_running" : NumberLong(0), "microsecs_running" : NumberLong(182), "op" : "command", "ns" : "admin.$cmd.aggregate", "command" : { "aggregate" : 1, "pipeline" : [ { "$currentOp" : { "allUsers" : true, "idleConnections" : false, "truncateOps" : false } }, { "$match" : { } } ], "cursor" : { }, "lsid" : { "id" : UUID("f83e33d1-9966-44a4-87de-817de0d804a3") }, "$readPreference" : { "mode" : "primaryPreferred" }, "$db" : "admin" }, "numYields" : 0, "locks" : { }, "waitingForLock" : false, "lockStats" : { }, "waitingForFlowControl" : false, "flowControlStats" : { } }, { "type" : "op", "host" : "LAPTOP-P6QEH9UD:27017", "desc" : "Checkpointer", "active" : true, "currentOpTime" : "2023-02-07T23:12:23.086+08:00", "opid" : 3, "op" : "none", "ns" : "", "command" : { }, "numYields" : 0, "locks" : { }, "waitingForLock" : false, "lockStats" : { }, "waitingForFlowControl" : false, "flowControlStats" : { } }, { "type" : "op", "host" : "LAPTOP-P6QEH9UD:27017", "desc" : "JournalFlusher", "active" : true, "currentOpTime" : "2023-02-07T23:12:23.086+08:00", "opid" : 419, "op" : "none", "ns" : "", "command" : { }, "numYields" : 0, "locks" : { }, "waitingForLock" : false, 
"lockStats" : { }, "waitingForFlowControl" : false, "flowControlStats" : { } } ], "ok" : 1 }4、输出内容详解:opid,操作的唯一标识,可以使用这个字段来终止操作;active,操作是否正在进行,如果为false,意味着此操作已经让出或者正在等待其它操作交出锁;secs_running,操作的持续时间,可以使用这个字段查询耗时过长的操作;op,操作类型,通常为query、insert、update、remove;desc,客户端的标识符,可以与日志中的消息相关联;locks,描述操作所涉及的锁类型;waitingForLock,当前操作是否处于阻塞中并等待获取锁;numYields,操作释放锁以允许其它操作进行的次数。一个操作只有在其它操作进入队列并等待获取它的锁时才会让出自己的锁,如果没有操作处于waitingForLock状态,则当前操作不会让出锁;lockStats.timeAcquiringMiros,操作为了获取锁所花费的时间;通过``db.currentOp()找到慢查询后,可以通过db.killOp(opid)```的方式将其终止。并不是所有操作都可以被终止,只有当操作让出时,才能终止,因此,更新、查找、删除操作都可以被终止,但持有或等待锁的操作不能被终止。如果MongoDB中的请求发生了堆积,那么这些写操作将堆积在操作系统的套接字缓冲区,当终止MongoDB正在运行的写操作时,MongoDB依旧会处理缓冲区的写操作。可以通过开启写入确认机制,保证每次写操作都要等前一个写操作完成后才能执行,而不是仅仅等到前一个写操作处于数据库服务器的缓冲区就开始下一次写入。五、系统分析器系统分析器可以提供大量关于耗时过长操作的信息,但系统分析器会严重的降低MongoDB的效率,因为每次写操作都会将其记录在system.profile中记录一下。每次读操作都必须等待system.profile写入完毕才行。开启分析器:MongoDB Enterprise > db.setProfilingLevel(2) { "was" : 0, "slowms" : 100, "sampleRate" : 1, "ok" : 1 }slowms决定了在日志中打印慢速操作的阈值。比如slowms设置为100,那么每个耗时超过100毫秒的操作都会被记录在日志中,即使分析器是关闭的。查询分析级别:MongoDB Enterprise > db.getProfilingLevel() 2重新启动MongoDB数据库会重置分析级别。六、一些常见的辅助命令通过Object.bsonsize函数获取其在磁盘中存储大小,单位是字节。> Object.bsonsize(db.worker.find()) 65194使用mongotop统计哪些集合最繁忙。使用mongotop --locks统计每个数据库的锁信息。mongostat提供了整个服务器范围的信息。

0
0
0
Views: 605
无事小神仙

A production incident that made MongoDB sharding click for me

Our current project uses MongoDB to store images and documents. Why not just put them in MySQL instead of standing up a MongoDB cluster — isn't that a hassle? Let's dig in and keep learning the theory and practice of MongoDB sharding. In three minutes you will see: how a MongoDB production incident was resolved quickly; what MongoDB sharding is; how MongoDB shards data; when to shard; how to set up sharded MongoDB servers; and how MongoDB tracks data in a sharded cluster.

MongoDB refusing connections? Clearly the MongoDB service was down again. I logged on to the server to investigate. ps -aef|grep mongo showed whether the mongo processes were still there — as expected, they were all gone, most likely because the disk was full. df -TH confirmed it: the disk was at 100%. The fix: cd into the log directory, delete the logs with rm -rf *, and restart MongoDB.
The restart then hung on: about to fork child process, waiting until server is ready for connection. Because MongoDB is deployed as a cluster, it synchronizes data at startup, which can take a while. Impatient as I am, I hit Ctrl+C to force-stop it and started it again. ps -aef|grep mongo now showed two identical processes. I killed every mongo process with ps -aef|grep mongo | grep -v grep | awk '{print $2}' | xargs kill -9, deleted mongod.lock and diagnostic.data from the data directory, and restarted MongoDB with the startup script mongos_start.sh (mongod --config data/mongodb.conf). Problem solved.
But what do all the pieces of a MongoDB sharded deployment actually mean, and how do they relate to each other? Here is a quick introduction to MongoDB sharding.

I. What is MongoDB sharding?
Sharding is the process of splitting data across machines, also called partitioning. MongoDB supports manual sharding, where the application maintains connections to several completely independent database servers and manages both where different pieces of data are stored and which server to query for them. This works, but it is hard to maintain when nodes are added to or removed from the cluster, or when data distribution or load patterns change.
MongoDB also supports automatic sharding, which tries to abstract the architecture away from the application and simplify administration. MongoDB balances data across shards automatically, making it easier to add and remove nodes. The sharding mechanism lets you build a cluster of many shards and spread a collection's data across them, with each shard holding a subset of the data. This lets an application grow beyond the resource limits of a single server or replica set.
To the application, a sharded cluster looks like a single server. In front of the shards run one or more routing processes called mongos. A mongos keeps a "table of contents" of which shard holds which data. Applications connect to this router as usual and issue requests; the router knows where the data lives and forwards each request to the appropriate shard. If there are responses to the request, the router collects and merges them before returning them to the application. As far as the application is concerned, it is talking to a single mongod.

II. How does MongoDB shard data?
You can quickly stand up a cluster on a single machine. First, start a mongo shell with the --nodb and --norc options: mongo --nodb --norc. Then create the cluster with the ShardingTest class:
st = ShardingTest({
    name: "one-min-shards",
    chunkSize: 1,
    shards: 2,
    rs: { nodes: 3, oplogSize: 10 },
    other: { enableBalancer: true }
});
- name: a label for the sharded cluster;
- shards: specifies that the cluster consists of two shards;
- rs: defines each shard as a three-node replica set;
- enableBalancer: enables the balancer once the cluster is up.
ShardingTest was designed for the server test suite, and it is very convenient for standing up a relatively complex sharded cluster while keeping resource usage as low as possible. When it runs, it creates a two-shard cluster in which each shard is a replica set, configures the replica sets, and starts each node with the options needed to establish replication. It starts a mongos to manage requests across the shards, so clients can interact with the cluster as if it were a standalone mongod. Finally, it starts an additional replica set for the config servers that maintain the routing-table information, ensuring that queries are directed to the correct shard.
The main use case for sharding is splitting a data set to get around hardware and cost limits, or to give the application better performance. Once ShardingTest has finished setting up the cluster, 10 processes are up and running that you can connect to: two replica sets of three nodes each, one three-node config server replica set, and one mongos. By default these processes start from port 20000, with mongos on port 20009.

III. When to shard?
Sharding is typically used to:
- increase available RAM;
- increase available disk space;
- reduce the load on a single server;
- handle more throughput than a single MongoDB instance can cope with.

IV. Setting up sharded MongoDB servers

1. The config server processes
The config servers are the brains of the cluster: they hold all of the metadata about which server contains which data, so they must be created first. Config servers are critically important: run them with journaling enabled and make sure their data lives on non-ephemeral drives. They must be started (mongod -f config.conf) before any mongos process, because mongos pulls its configuration from them. (A minimal sketch of initiating the config server replica set appears at the end of this part.)
When writing to the config servers, MongoDB uses writeConcern level "majority"; when reading from them, it uses readConcern level "majority". This ensures that sharded-cluster metadata is committed to the config server replica set only when it can no longer be rolled back, and that only metadata which would survive a config server failure is read. It guarantees that all mongos routers have a consistent view of how the data in the cluster is organized.
In terms of resources, config servers should have ample network and CPU. They only store the cluster's table of contents, so they need very little disk. Because they are so important, always back up the config server data before doing any cluster maintenance.

2. The mongos processes
mongos is the router that applications connect to. Start it with mongos -f config.conf. A mongos needs to know the config servers' addresses, so set configdb=configReplSet/<the three config server addresses> in its config file, and set logpath to keep MongoDB's logs. Run a reasonable number of mongos processes and place them as close to all the shards as possible; this improves query performance.

3. Converting a replica set into a shard
With the config servers and routers started in order, you can add shards. If a replica set already exists, it becomes the first shard. Starting with MongoDB 3.4, shard mongod instances must be started with the --shardsvr option — that is, add shardsvr=true to config.conf — and this must be repeated for every member of the replica set. Once the replica set has been added to the cluster as a shard, switch the application's connections from the replica set to the mongos router, and use a firewall to cut off direct connections from the application to the shards.
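As promised in part 1 above, a minimal sketch of initiating the config server replica set before starting any mongos. The host names, ports, and the replica set name configReplSet are assumptions, not from the original post.

// Hypothetical config server replica set initiation.
// Each config server would be started with something like:
//   mongod --configsvr --replSet configReplSet --dbpath <path> --port 27019
// Then, from a mongo shell connected to one of them:
rs.initiate({
    _id: "configReplSet",
    configsvr: true,
    members: [
        { _id: 0, host: "cfg1.example.com:27019" },
        { _id: 1, host: "cfg2.example.com:27019" },
        { _id: 2, host: "cfg3.example.com:27019" }
    ]
})
// A mongos could then be pointed at it with:
//   mongos --configdb configReplSet/cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019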
sh.enableSharding("test");再对集合进行分片,sh.shardCollection("test.worker",{"name":1});如果worker集合已经存在,则必须在name字段上有索引,否则,shardCollection会返回错误。如果分片的集合不存在,mongos会自动在name片键上创建索引。shardCollection命令会将集合拆分成多个数据块,MongoDB会在集群中的分片间均匀的分散集合中的数据。五、MongoDB如何追踪集群数据?1、数据块因为MongoDB的数据量巨大,MongoDB一般会将文档以数据块的形式进行分组,这些数据块是片键指定范围内的文档,MongoDB一般会用一个较小的表来维护数据块与分片之间的映射关系。需要注意:块与块之间不能重叠;一个块中的文档数量过大时,会自动拆分成两个文档;一个文档总是属于且仅属于一个块;2、块范围新分片的集合中只有一个块,块的边界是负无穷到正无穷;随着块的增长,MongoDB会自动将其拆分成两块,范围从负无穷到value,value到正无穷。范围较小的块包含比value小的值,范围较大的块包含value和比value大的值;因此,mongos可以很容易的找到文档在哪个块。3、拆分块各个分片的主节点mongod进程会跟踪它们当前的块,一旦达到某个阈值,就会检查该块是否需要拆分,如果需要拆分,mongod就会从配置服务器请求全局块大小配置值,然后执行块拆分并更新配置服务器上的元数据。配置服务器会创建新的块文档,并修改旧块的范围。当客户端写入一个块时,mongod会检查该块的拆分阈值。如果已经达到了拆分阈值,mongod就会向均衡器发送一个请求,将最顶部的块进行迁移,否则该块会留在此分片上。因为具有相同片键的两个文档一定会处于相同的块中,所以只能在片键值不同的文档之间进行拆分。下面文档如果以readTime分片,是可以的。但是,如果我读书读的比较快,所有书籍在一个月的时间里都读完了,readTime就会是一样的了,那就无法分片了。因此拥有不同的片键值在分片时,显得尤其重要。{"name":"哪吒编程","book":"Java核心技术","readTime":"October"} {"name":"哪吒编程","book":"Java编程思想","readTime":"October"} {"name":"哪吒编程","book":"深入理解Java虚拟机","readTime":"October"} {"name":"哪吒编程","book":"effective java","readTime":"November"} {"name":"哪吒编程","book":"重构 改善既有代码的设计","readTime":"November"} {"name":"哪吒编程","book":"高性能MySQL","readTime":"December"} {"name":"哪吒编程","book":"Spring技术内幕","readTime":"December"} {"name":"哪吒编程","book":"重学Java设计模式","readTime":"December"} {"name":"哪吒编程","book":"深入理解高并发编程","readTime":"January"} {"name":"哪吒编程","book":"Redis设计与实现","readTime":"January"}分片的前提条件是所有的配置服务器必须启动并可以访问。如果mongod不断接到对一个块的写请求,则它会持续尝试拆分该块并失败,而这些拆分尝试会拖慢mongod。mongod反复尝试分片却无法成功分片的过程被称为拆分风暴。六、均衡器均衡器负责数据的迁移。均衡器会定期检查分片之间是否存在不均衡,如果存在,就会对块进行迁移。在MongoDB 3.4 以上的版本上,均衡器位于配置服务器副本集的主节点成员上。均衡器是配置服务器副本集主节点上的后台进程,它会监视每个分片上的块数量。只有当一个分片上的块数量达到特定迁移阈值时,均衡器才会被激活。

0
0
0
Views: 518
无事小神仙

Since learning MongoDB high availability I have slowly come to like it — I really did neglect it before

Our current project uses MongoDB to store images and documents. Why not just put them in MySQL instead of standing up a MongoDB cluster — isn't that a hassle? Let's dig in and keep learning about MongoDB high availability and shard key strategy: a quick way to get started and a few more interview talking points.

I. Replication
In MongoDB, replication becomes available once you create a replica set. A replica set is a group of servers: one primary that handles writes, and several secondaries that keep copies of the primary's data. If the primary crashes, the secondaries elect a new primary. With replication, if one server goes down, the data can still be accessed from the other members of the set; and if the data on a server is corrupted or unreachable, a new copy can be created from another member. Every member of a replica set must be able to connect to every other member; if you see warnings about members being unable to reach each other, you may need to change the network configuration to allow those connections.

II. How elections work
When a secondary cannot reach the primary, it contacts the other replica set members and asks to be elected primary. The other members run several sanity checks: Can they reach the primary that the candidate cannot? Does the candidate have up-to-date data? Is there a higher-priority member that should be elected instead?
MongoDB 3.2 introduced version 1 of the replication protocol. It is a RAFT-like protocol that incorporates replica-set-specific concepts such as arbiters, priorities, non-voting members, and write concern. It also brought shorter failover times, greatly reducing the time needed to detect a failed primary, and it uses term IDs to prevent double voting. RAFT is a consensus algorithm broken down into relatively independent subproblems; consensus is the process by which multiple servers or processes agree on values. RAFT guarantees consistency, so that the same sequence of commands produces the same sequence of results and reaches the same sequence of states on every deployed member.
Replica set members send each other heartbeats every two seconds. If a member does not respond within 10 seconds, the other members mark it as unreachable. The election algorithm makes a best effort to let the highest-priority secondary call the election. Member priority affects both the timing and the outcome of elections: higher-priority secondaries call elections sooner and are more likely to become primary. However, a lower-priority secondary may still be elected primary briefly; members keep calling elections until the highest-priority available member becomes primary. A secondary can only be elected primary if it has the most up-to-date replicated data.

III. Priority
Priority indicates how strongly a member is preferred as primary, with values from 0 to 100: the higher the number, the higher the priority, and the default is 1. A member with priority 0 can never become primary — such a member is called a passive member. (A minimal replica set configuration sketch appears at the end of this post.)

IV. Arbiters
Many small deployments have only two data-bearing members in the replica set. To make elections possible (a majority is still required), MongoDB supports a special member type called an arbiter, whose only job is to take part in elections. An arbiter stores no data and serves no client traffic; it exists solely to help a two-member set elect a primary. Note that there should be at most one arbiter.
The downside of arbiters: suppose you have a primary, two secondaries, and an arbiter. If one secondary dies, you need a new secondary, and the primary's data must be copied to it, which puts heavy load on the server and slows the application down. So whenever possible, use an odd number of data-bearing secondaries rather than an arbiter.

V. Syncing
MongoDB keeps multiple servers in sync by means of the oplog, which records every write operation the primary performs. The oplog lives in a capped collection in the primary's local database; secondaries query this collection for the operations they need to replicate. Each secondary also maintains its own oplog, recording every operation it replicates from the primary, which allows any member to be used as a sync source for other members. If applying an operation fails, the secondary stops replicating from its current source.
If a secondary goes down for some reason, when it restarts it resumes syncing from the last operation in its own oplog. Because operations are applied to the data first and then written to the oplog, a secondary may replay operations it has already applied. MongoDB was designed with this in mind: applying an oplog operation once or many times has the same effect — every oplog operation is idempotent.

VI. Handling stale data
A secondary is stale when it has fallen far behind its sync source's current operations. A stale secondary cannot catch up, because continuing to sync would mean skipping operations. It will try replicating from other members to see whether any of them has a longer oplog it can sync from. If none does, replication on that member stops, and it needs a full resync or a restore from a recent backup. To avoid stale members, give the primary a large oplog that keeps plenty of history.

VII. Hashed shard keys
For loading data as fast as possible, hashed shard keys are the best choice: they distribute any field randomly. They are a good option if you plan to use an ascending key in many queries but want writes to be distributed randomly. The trade-off is that you cannot run targeted range queries against a hashed shard key. Creating a hashed index:
db.users.createIndex({"name": "hashed"})
One caveat: the hashed field cannot be an array.
Error: hashed indexes do not currently support array values.

VIII. Multiple hotspots
A standalone mongod is most efficient with ascending writes, whereas sharding is most efficient when writes are spread across the cluster — the two conflict. The idea is to give each shard a few hotspots so that writes are evenly distributed across the cluster. You can do this with a compound shard key: the first field has low cardinality, and the second field is an ascending value, which means that within a chunk, values are always increasing.

IX. Sharding rules

1. Restrictions on shard keys
As the error above shows, the shard key cannot be an array, and most special index types cannot be used as shard keys. In particular, you cannot shard on a geospatial index.

2. Shard key cardinality
Like indexes, shard keys perform better on high-cardinality fields. If a status key only has the values "normal", "abnormal", and "error", MongoDB cannot split the data into more than three chunks (there are only three values). If you want to use a low-cardinality key as the shard key, combine it with a field that has many distinct values, such as createTime, to form a compound shard key with higher cardinality.

X. Controlling data distribution

1. Automatic distribution with zones
MongoDB spreads each collection evenly across the shards in the cluster, which is very efficient for homogeneous data. But suppose you have a low-value log collection: you probably don't want it on your best servers, which should be reserved for important real-time data rather than shared with other collections. You can express this with zones:
sh.addShardToZone("shard0", "high")
sh.addShardToZone("shard1", "low")
sh.addShardToZone("shard2", "low")
Different collections can then be assigned to different shards. For a highly important real-time collection:
sh.updateZoneKeyRange("super.important", {"<shardKey>": MinKey}, {"<shardKey>": MaxKey}, "high")
This command says: for the collection super.important, keep the data for shard key values from negative infinity to positive infinity on the shards tagged "high". It does not affect how other collections are distributed. Likewise, the unimportant log collection can be kept on the weaker servers via the "low" zone:
sh.updateZoneKeyRange("super.logs", {"<shardKey>": MinKey}, {"<shardKey>": MaxKey}, "low")
The log collection will then be spread evenly across shard1 and shard2. A zone key range can be removed again with sh.removeRangeFromZone():
sh.removeRangeFromZone("super.logs", {"<shardKey>": MinKey}, {"<shardKey>": MaxKey})

2. Manual distribution
You can switch to manual distribution by turning off the balancer with sh.stopBalancer(). If a migration is currently in progress, the setting takes effect only after that migration completes; once it does, the balancer stops moving data. Unless you are dealing with a special situation, MongoDB should use automatic sharding rather than manual sharding.
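To make the replication part concrete, here is the minimal configuration sketch promised in section III: a three-member replica set with priorities and a passive member, run from a mongo shell connected to the first node. The replica set name rs0 and the host names are assumptions, not from the original post.

// Hypothetical replica set configuration sketch.
rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "db1.example.com:27017", priority: 2 },   // preferred primary
        { _id: 1, host: "db2.example.com:27017", priority: 1 },   // normal secondary
        { _id: 2, host: "db3.example.com:27017", priority: 0 }    // passive member: can never become primary
    ]
})
rs.status()    // check member states and replication progress
// If only two data-bearing members are possible, an arbiter could be added instead of a third data node:
// rs.addArb("arb1.example.com:27017")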

0
0
0
Views: 497
无事小神仙

MongoDB 4.0 supports transactions — how many people still want to use MySQL?

Our current project uses MongoDB to store images and documents. Why not just put them in MySQL instead of standing up a MongoDB cluster — isn't that a hassle? Let's dig in and keep learning about MongoDB transactions, connection pools, and the aggregation framework.

I. MongoDB doesn't support transactions?
Some third-party articles describe MongoDB as a BASE database — "basically available, soft state, eventually consistent". That is not true, and it never has been. MongoDB has never been "eventually consistent": reads and writes on the primary are strongly consistent, and updates to a single document are always atomic. Soft state implies data has to be continually refreshed or it goes stale, which is not how MongoDB works. Finally, if too many nodes are unavailable to reach a quorum, MongoDB goes read-only (reducing availability). This is deliberate: it preserves consistency when things go wrong.
MongoDB is an ACID database: it supports atomicity, consistency, isolation, and durability. Updates to a single document are always atomic, and starting with version 4.0 MongoDB also supports transactions across multiple documents and collections. Since 4.2 it even supports cross-shard transactions on sharded clusters. Even so, use transactions with care: they come at a performance cost, and because MongoDB supports rich hierarchical documents, a well-designed schema rarely needs to update data across multiple documents.

II. What is a transaction?
A transaction is a logical unit of processing in a database, consisting of one or more operations, which may be reads or writes. MongoDB supports ACID transactions across multiple operations, collections, databases, documents, and shards. The key property of a transaction: either everything succeeds, or everything fails.

III. Defining ACID
ACID is the set of properties a transaction needs: atomicity, consistency, isolation, and durability. ACID transactions keep data and database state valid even in the face of power failures and other errors.
Atomicity ensures that either all operations in the transaction are applied or none are. Consistency ensures that a successful transaction moves the database from one consistent state to the next. Isolation is the property that allows multiple transactions to run in the database at the same time: no transaction sees the partial results of another, so running transactions in parallel yields the same results as running them one after another. Durability ensures that once a transaction commits, its data is persisted even if the system fails.
A database is called ACID-compliant when it satisfies all of these properties and only successful transactions are processed. If a failure occurs before a transaction completes, ACID guarantees that no data is changed. MongoDB is a distributed database that supports ACID transactions across replica sets and across shards, where the network layer adds extra complexity.

IV. How to use transactions
MongoDB provides two APIs for transactions. The first resembles relational databases (start_transaction and commit_transaction) and is called the core API; the second is the callback API, which is generally the recommended one. (A shell sketch of the core API appears after section V below.)
The core API provides no retry logic for most errors; it requires developers to hand-code the operations, the commit, and any retry and error-handling logic. The callback API, by contrast, provides a single function that wraps a lot of functionality: it starts a transaction associated with the specified logical session, runs the function supplied as a callback, and commits the transaction. It also includes retry logic for commit errors. The callback API was added in MongoDB 4.2 to simplify application development with transactions and to make it easier to add retry logic for transaction errors.
Comparing the core API and the callback API:
- Core API: requires explicit calls to start and commit the transaction; does not include error-handling logic for TransientTransactionError and UnknownTransactionCommitResult, instead leaving you the flexibility to handle these errors yourself; requires passing the explicit logical session for the transaction to the API.
- Callback API: starts the transaction, runs the specified operations, and commits (or aborts if an error occurs); automatically provides error-handling logic for TransientTransactionError and UnknownTransactionCommitResult; also requires passing the explicit logical session for the transaction to the API.

V. Important parameters
MongoDB transactions are bounded in two ways. The first is time: how long a transaction may run, how long it may wait to acquire locks, and the maximum lifetime of any transaction. The second is MongoDB's limits on oplog entries and the size of an individual entry.

1. Time limits
The default maximum runtime of a transaction is one minute. It can be increased by changing transactionLifetimeLimitSeconds; for a sharded cluster, the parameter must be set on all shard replica set members. After this time the transaction is considered expired and is aborted by a periodic cleanup process, which runs every 60 seconds or every transactionLifetimeLimitSeconds/2, whichever is smaller.
To set an explicit time limit on a transaction, it is recommended to specify maxTimeMS when committing. If maxTimeMS is not set, transactionLifetimeLimitSeconds applies; if it is set but exceeds transactionLifetimeLimitSeconds, transactionLifetimeLimitSeconds still wins.
The default maximum time a transaction waits to acquire the locks its operations need is 5 milliseconds, controlled by maxTransactionLockRequestTimeoutMillis. If the transaction cannot acquire its locks within that time, it is aborted. The parameter can be set to 0, -1, or a number greater than 0: 0 means the transaction is aborted if it cannot immediately acquire all of the locks it needs; -1 means the timeout specified by maxTimeMS is used; any value greater than 0 sets the lock-acquisition wait time to that many milliseconds.

2. Oplog size limits
MongoDB creates as many oplog entries as there are write operations in the transaction, and each oplog entry must fit within the 16MB BSON document size limit.
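As promised in section IV, a minimal, hedged sketch of the core API in the mongo shell. The database and collection (test.accounts), the documents, and the read/write concern choices are assumptions for illustration; driver APIs differ in the details.

// Hypothetical core-API transaction: move 100 between two accounts atomically.
var session = db.getMongo().startSession();
var accounts = session.getDatabase("test").accounts;
session.startTransaction({ readConcern: { level: "snapshot" },
                           writeConcern: { w: "majority" } });
try {
    accounts.updateOne({ _id: "alice" }, { $inc: { balance: -100 } });
    accounts.updateOne({ _id: "bob" },   { $inc: { balance:  100 } });
    session.commitTransaction();           // both updates become visible together
} catch (e) {
    session.abortTransaction();            // neither update is applied
    throw e;
} finally {
    session.endSession();
}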
VI. A connection pool is a cache of database connections
When I first started with MongoDB, I connected like this:
MongoDatabase database = new MongoClient("localhost", 27017).getDatabase("test");
This creates a new connection for every request and then tears it down. Database connections are generally TCP connections, and TCP connections are long-lived: if you don't close them, they stay open. Creating a new database connection is expensive, so reusing existing connections is the better choice — which is exactly what a connection pool is for. When a new connection is needed, a cached connection from the pool can be reused. Used well, a connection pool minimizes the number and frequency of new database connections. A DB object, representing one connection to the MongoDB database, can be obtained via Mongo.get. By default, when a database query finishes, the connection automatically goes back into the pool: the API's finally handling returns it, with no manual call required.

1. Querying MongoDB in five steps
- The MongoDB client finds an available MongoDB server;
- The client establishes a connection to that server;
- The application thread takes a connection from the connection pool;
- Data is transferred (with the connection in hand, socket communication fetches the data);
- The connection is released.

2. Connection pool configuration parameters
# maximum number of connections allowed by the pool
connectionsPerHost: 40
# multiplier on connectionsPerHost: how many threads may block waiting for a connection
threadsAllowedToBlockForConnectionMultiplier: 20
# 1. time the client waits to find an available MongoDB server
serverSelectionTimeout: 40000
# 2. time to establish a (new) connection to the MongoDB server
connectTimeout: 60000
# 3. time the application thread waits to get a connection from the pool
maxWaitTime: 120000
# reconnect automatically
autoConnectRetry: true
# keep the socket alive
socketKeepAlive: true
# 4. data transfer (socket communication after the connection is obtained)
socketTimeout: 30000
slaveOk: true
dbName: ngo
# whether to authenticate
auth: false
# user name
username: ngo
# password
password: 12345678

VII. The aggregation framework
The aggregation framework is MongoDB's set of analytics tools for analyzing documents in one or more collections. It is built on the concept of a pipeline: an aggregation pipeline takes input from a MongoDB collection and passes its documents through one or more stages, each performing a different operation on its input. Each stage takes the previous stage's output as its input; the input and output of every stage are documents — a stream of documents.
Each stage exposes a set of knobs, or tunables, that you can adjust to parameterize the stage for the task at hand. These tunables usually take the form of operators that modify fields, perform arithmetic, reshape documents, carry out various accumulation tasks, and more. Common pipeline stages include match, project, sort, skip, and limit. (A short pipeline example appears at the end of this post.)

VIII. Designing the document schema
Schema design is about how data is represented in documents; it starts from understanding how the data is queried and accessed.

1. Constraints
For example, the maximum document size is 16MB.

2. Query and write access patterns
Knowing how often queries run and how long they take lets you identify the most common ones. Once those are identified, try to minimize the number of queries and make sure that data queried together is stored in the same document; data those queries never use should live in separate collections. Consider whether dynamic (read/write) data can be separated from static (read-only) data. Prioritizing the most common queries gives the best performance in schema design.

3. Relationship types
Consider which data is related according to the business logic and the relationships between documents, and decide whether to embed or to reference. Work out how to reference documents without running extra queries, and how many documents need updating when a relationship changes. Also consider whether the data structure is easy to query.

4. Normalization vs. denormalization
Normalization spreads data across multiple collections that reference each other; denormalization embeds all of the data in a single document. Normalized data is faster to write, denormalized data is faster to read, so choose according to what the application actually needs.

5. Embedding vs. referencing
Embedding is the better fit for: smaller subdocuments; data that does not change often; data where eventual consistency is acceptable; data that would otherwise need a second query to fetch; fast reads.
Referencing is the better fit for: larger subdocuments; data that changes frequently; data that must be strongly consistent; data that is usually excluded from results; fast writes.

6. Optimizing data operations
Optimizing reads usually means having the right indexes and returning as much as possible in a single document; optimizing writes usually means reducing the number of indexes and making updates as efficient as possible. For removing old data there are three options: capped collections; TTL collections, which give precise control over when documents are deleted but cannot keep up with very heavy write loads, since they delete documents by traversing the TTL index; and splitting the data, for example one collection per month, which is more complex to implement because it requires dynamic collection or database names and may mean querying multiple databases.

IX. Summary
MongoDB supports transactions from 4.0 onward, and it is an ACID database supporting atomicity, consistency, isolation, and durability. Even so, use transactions with care: they come at a performance cost. We looked at how to use transactions and how to configure their parameters, gained a deeper understanding of how MongoDB executes queries, saw why connection pools matter, dug into document schema design, and summarized ways to optimize MongoDB reads and writes.
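As promised in section VII, a brief pipeline example that strings together the common stages mentioned above (match, project, sort, skip, limit). The worker collection and its fields (dept, salary) are hypothetical.

// Hypothetical aggregation pipeline over a worker collection.
db.worker.aggregate([
    { $match:   { "dept": "engineering" } },              // filter documents early, so later stages see less data
    { $project: { "name": 1, "salary": 1, "_id": 0 } },   // keep only the fields we care about
    { $sort:    { "salary": -1 } },                       // highest salary first
    { $skip:    10 },                                     // skip the first page
    { $limit:   10 }                                      // return the second page of 10 documents
])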

0
0
0
Views: 333